Mar 18 13:05:41.291619 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 13:05:41.880448 master-0 kubenswrapper[3938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:05:41.880448 master-0 kubenswrapper[3938]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 13:05:41.880448 master-0 kubenswrapper[3938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:05:41.880448 master-0 kubenswrapper[3938]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:05:41.880448 master-0 kubenswrapper[3938]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 13:05:41.880448 master-0 kubenswrapper[3938]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
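The deprecation notices above ask for these flags to be moved into the file given by --config (/etc/kubernetes/kubelet.conf in this log). A minimal KubeletConfiguration sketch with the equivalent fields, assuming the v1beta1 kubelet config API; values are illustrative and mirror the FLAG dump later in this log:

```yaml
# Sketch only: config-file equivalents of the deprecated flags named above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
# --minimum-container-ttl-duration has no direct config-file field; the
# kubelet suggests evictionHard / evictionSoft instead.
# --pod-infra-container-image moves to the container runtime (e.g. the
# pause_image setting in CRI-O), not to this file.
```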
Mar 18 13:05:41.881427 master-0 kubenswrapper[3938]: I0318 13:05:41.881115 3938 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888402 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888459 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888469 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888478 3938 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888486 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888495 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888504 3938 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888511 3938 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:05:41.888498 master-0 kubenswrapper[3938]: W0318 13:05:41.888520 3938 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888528 3938 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888536 3938 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888546 3938 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888556 3938 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888564 3938 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888572 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888580 3938 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888588 3938 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888595 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888603 3938 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888611 3938 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888618 3938 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888626 3938 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888633 3938 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888641 3938 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888669 3938 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888677 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888686 3938 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888694 3938 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:05:41.889018 master-0 kubenswrapper[3938]: W0318 13:05:41.888703 3938 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888710 3938 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888719 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888726 3938 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888734 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888742 3938 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888750 3938 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888757 3938 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888765 3938 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888772 3938 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888780 3938 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888788 3938 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888796 3938 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888803 3938 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888812 3938 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888819 3938 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888829 3938 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888837 3938 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888844 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888852 3938 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:05:41.890035 master-0 kubenswrapper[3938]: W0318 13:05:41.888860 3938 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888868 3938 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888876 3938 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888884 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888891 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888899 3938 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888907 3938 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888915 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888927 3938 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888963 3938 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888974 3938 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888986 3938 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.888994 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.889005 3938 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.889017 3938 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.889026 3938 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.889034 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.889042 3938 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:05:41.890906 master-0 kubenswrapper[3938]: W0318 13:05:41.889050 3938 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: W0318 13:05:41.889059 3938 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: W0318 13:05:41.889066 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: W0318 13:05:41.889074 3938 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: W0318 13:05:41.889082 3938 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: W0318 13:05:41.889090 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890454 3938 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890482 3938 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890497 3938 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890509 3938 flags.go:64] FLAG: 
--application-metrics-count-limit="100" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890520 3938 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890529 3938 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890542 3938 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890553 3938 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890562 3938 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890571 3938 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890584 3938 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890594 3938 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890604 3938 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890613 3938 flags.go:64] FLAG: --cgroup-root="" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890622 3938 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890631 3938 flags.go:64] FLAG: --client-ca-file="" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890639 3938 flags.go:64] FLAG: --cloud-config="" Mar 18 13:05:41.891720 master-0 kubenswrapper[3938]: I0318 13:05:41.890648 3938 flags.go:64] FLAG: --cloud-provider="" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890657 3938 
flags.go:64] FLAG: --cluster-dns="[]" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890669 3938 flags.go:64] FLAG: --cluster-domain="" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890677 3938 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890687 3938 flags.go:64] FLAG: --config-dir="" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890696 3938 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890707 3938 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890717 3938 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890727 3938 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890736 3938 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890745 3938 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890755 3938 flags.go:64] FLAG: --contention-profiling="false" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890764 3938 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890773 3938 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890782 3938 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890791 3938 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890802 3938 flags.go:64] FLAG: 
--cpu-manager-reconcile-period="10s" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890811 3938 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890820 3938 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890828 3938 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890837 3938 flags.go:64] FLAG: --enable-server="true" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890847 3938 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890857 3938 flags.go:64] FLAG: --event-burst="100" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890867 3938 flags.go:64] FLAG: --event-qps="50" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890876 3938 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 13:05:41.893125 master-0 kubenswrapper[3938]: I0318 13:05:41.890887 3938 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890896 3938 flags.go:64] FLAG: --eviction-hard="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890907 3938 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890916 3938 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890927 3938 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890968 3938 flags.go:64] FLAG: --eviction-soft="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890979 3938 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 
13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890989 3938 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.890998 3938 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891006 3938 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891015 3938 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891024 3938 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891033 3938 flags.go:64] FLAG: --feature-gates="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891044 3938 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891053 3938 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891062 3938 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891072 3938 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891082 3938 flags.go:64] FLAG: --healthz-port="10248" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891090 3938 flags.go:64] FLAG: --help="false" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891100 3938 flags.go:64] FLAG: --hostname-override="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891109 3938 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891118 3938 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891128 3938 
flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891136 3938 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891145 3938 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 13:05:41.894468 master-0 kubenswrapper[3938]: I0318 13:05:41.891154 3938 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891164 3938 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891173 3938 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891182 3938 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891191 3938 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891200 3938 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891209 3938 flags.go:64] FLAG: --kube-reserved="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891220 3938 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891232 3938 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891244 3938 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891255 3938 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891266 3938 flags.go:64] FLAG: --lock-file="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891276 3938 flags.go:64] FLAG: 
--log-cadvisor-usage="false" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891285 3938 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891295 3938 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891322 3938 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891331 3938 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891340 3938 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891349 3938 flags.go:64] FLAG: --logging-format="text" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891358 3938 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891368 3938 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891377 3938 flags.go:64] FLAG: --manifest-url="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891388 3938 flags.go:64] FLAG: --manifest-url-header="" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891400 3938 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891409 3938 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 13:05:41.895608 master-0 kubenswrapper[3938]: I0318 13:05:41.891420 3938 flags.go:64] FLAG: --max-pods="110" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891429 3938 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891439 3938 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 
13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891448 3938 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891457 3938 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891466 3938 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891475 3938 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891484 3938 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891504 3938 flags.go:64] FLAG: --node-status-max-images="50" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891513 3938 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891523 3938 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891532 3938 flags.go:64] FLAG: --pod-cidr="" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891540 3938 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891554 3938 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891565 3938 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891574 3938 flags.go:64] FLAG: --pods-per-core="0" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891583 3938 flags.go:64] FLAG: --port="10250" Mar 18 13:05:41.896852 master-0 
kubenswrapper[3938]: I0318 13:05:41.891592 3938 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891601 3938 flags.go:64] FLAG: --provider-id="" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891610 3938 flags.go:64] FLAG: --qos-reserved="" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891619 3938 flags.go:64] FLAG: --read-only-port="10255" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891628 3938 flags.go:64] FLAG: --register-node="true" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891637 3938 flags.go:64] FLAG: --register-schedulable="true" Mar 18 13:05:41.896852 master-0 kubenswrapper[3938]: I0318 13:05:41.891646 3938 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891661 3938 flags.go:64] FLAG: --registry-burst="10" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891669 3938 flags.go:64] FLAG: --registry-qps="5" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891679 3938 flags.go:64] FLAG: --reserved-cpus="" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891688 3938 flags.go:64] FLAG: --reserved-memory="" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891700 3938 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891710 3938 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891719 3938 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891729 3938 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891738 3938 flags.go:64] FLAG: --runonce="false" Mar 
18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891747 3938 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891757 3938 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891766 3938 flags.go:64] FLAG: --seccomp-default="false" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891776 3938 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891785 3938 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891794 3938 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891803 3938 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891813 3938 flags.go:64] FLAG: --storage-driver-password="root" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891822 3938 flags.go:64] FLAG: --storage-driver-secure="false" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891830 3938 flags.go:64] FLAG: --storage-driver-table="stats" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891839 3938 flags.go:64] FLAG: --storage-driver-user="root" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891848 3938 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891857 3938 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891866 3938 flags.go:64] FLAG: --system-cgroups="" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891875 3938 flags.go:64] FLAG: 
--system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 18 13:05:41.898320 master-0 kubenswrapper[3938]: I0318 13:05:41.891888 3938 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891897 3938 flags.go:64] FLAG: --tls-cert-file="" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891907 3938 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891918 3938 flags.go:64] FLAG: --tls-min-version="" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891927 3938 flags.go:64] FLAG: --tls-private-key-file="" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891962 3938 flags.go:64] FLAG: --topology-manager-policy="none" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891972 3938 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891981 3938 flags.go:64] FLAG: --topology-manager-scope="container" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.891991 3938 flags.go:64] FLAG: --v="2" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.892002 3938 flags.go:64] FLAG: --version="false" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.892014 3938 flags.go:64] FLAG: --vmodule="" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.892024 3938 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: I0318 13:05:41.892034 3938 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892297 3938 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892314 3938 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892333 3938 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892344 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892352 3938 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892360 3938 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892368 3938 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892376 3938 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892383 3938 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892391 3938 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:05:41.899662 master-0 kubenswrapper[3938]: W0318 13:05:41.892399 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892407 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892414 3938 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892422 3938 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892430 3938 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 
13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892438 3938 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892445 3938 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892453 3938 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892460 3938 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892468 3938 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892475 3938 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892484 3938 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892492 3938 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892503 3938 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
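The deprecated-flag warnings at startup and the gate settings resolved above are the kind of thing that normally lives in the kubelet's config file rather than on the command line. A minimal, hypothetical `KubeletConfiguration` sketch follows; the `systemReserved` values and gate names are taken from this log's own flag dump and resolved gate map, but the fragment as a whole is illustrative, not this node's actual config:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Gate values mirror the resolved map logged by feature_gate.go:386;
# only a few are shown here for illustration.
featureGates:
  CloudDualStackNodeIPs: true
  DisableKubeletCloudCredentialProviders: true
  KMSv1: true
# Matches the --system-reserved flag value logged above.
systemReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
```

Moving these settings into the file pointed to by `--config` silences the "should be set via the config file" deprecation warnings seen earlier in this log.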
Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892512 3938 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892520 3938 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892528 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892536 3938 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892543 3938 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892551 3938 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:05:41.901073 master-0 kubenswrapper[3938]: W0318 13:05:41.892559 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892570 3938 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892580 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892589 3938 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892600 3938 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892608 3938 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892616 3938 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892627 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892635 3938 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892643 3938 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892651 3938 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892659 3938 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892667 3938 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892674 3938 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892682 3938 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:05:41.902329 master-0 
kubenswrapper[3938]: W0318 13:05:41.892689 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892698 3938 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892706 3938 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892713 3938 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:05:41.902329 master-0 kubenswrapper[3938]: W0318 13:05:41.892721 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892728 3938 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892736 3938 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892744 3938 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892751 3938 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892760 3938 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892767 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892775 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892783 3938 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892790 3938 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892798 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892805 3938 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892813 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892821 3938 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892829 3938 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892837 3938 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892847 3938 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892861 3938 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892870 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892879 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:05:41.903303 master-0 kubenswrapper[3938]: W0318 13:05:41.892889 3938 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:05:41.904435 master-0 kubenswrapper[3938]: W0318 13:05:41.892899 3938 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 13:05:41.904435 master-0 kubenswrapper[3938]: W0318 13:05:41.892909 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:05:41.904435 master-0 kubenswrapper[3938]: I0318 13:05:41.892962 3938 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 13:05:41.906213 master-0 kubenswrapper[3938]: I0318 13:05:41.906128 3938 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 18 13:05:41.906213 master-0 kubenswrapper[3938]: I0318 13:05:41.906199 3938 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 13:05:41.906356 master-0 kubenswrapper[3938]: W0318 13:05:41.906332 3938 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:05:41.906356 master-0 kubenswrapper[3938]: W0318 13:05:41.906354 3938 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906366 3938 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906375 3938 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906387 3938 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
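The `feature_gate.go:386` entry above prints the fully resolved gate map in Go's `map[...]` notation, which is awkward to grep. A small sketch for pulling that line apart when triaging logs like this one (the helper name and sample line are illustrative, not part of any kubelet tooling):

```python
import re

def parse_feature_gates(line: str) -> dict[str, bool]:
    """Parse a kubelet 'feature gates: {map[Name:bool ...]}' log line into a dict."""
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

# Abbreviated sample in the same shape as the log line above.
line = ('I0318 13:05:41.892962 3938 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}')
print(parse_feature_gates(line))
```

With the map as a dict it is easy to diff the three resolved-gate lines in this log against each other, or against a desired configuration.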
Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906401 3938 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906410 3938 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906419 3938 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906428 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906436 3938 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906445 3938 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906454 3938 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906462 3938 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906470 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906478 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906487 3938 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:05:41.906473 master-0 kubenswrapper[3938]: W0318 13:05:41.906495 3938 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906503 3938 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906511 3938 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906522 3938 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906531 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906540 3938 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906549 3938 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906558 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906567 3938 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906575 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906584 3938 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906592 3938 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906601 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906608 3938 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906616 3938 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906624 3938 
feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906632 3938 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906640 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906650 3938 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:05:41.907338 master-0 kubenswrapper[3938]: W0318 13:05:41.906658 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906668 3938 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906678 3938 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906686 3938 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906694 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906701 3938 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906709 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906717 3938 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906725 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 
13:05:41.906733 3938 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906740 3938 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906748 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906756 3938 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906763 3938 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906771 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906782 3938 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906790 3938 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906797 3938 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906805 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 13:05:41.908348 master-0 kubenswrapper[3938]: W0318 13:05:41.906813 3938 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906820 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906828 3938 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906836 3938 feature_gate.go:330] unrecognized 
feature gate: BareMetalLoadBalancer Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906843 3938 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906851 3938 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906858 3938 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906866 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906874 3938 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906882 3938 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906889 3938 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906897 3938 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906904 3938 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906912 3938 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906922 3938 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906932 3938 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906964 3938 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:05:41.909318 master-0 kubenswrapper[3938]: W0318 13:05:41.906972 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: I0318 13:05:41.906986 3938 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907216 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907232 3938 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907241 3938 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907249 3938 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907258 3938 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907266 3938 feature_gate.go:330] unrecognized 
feature gate: NetworkDiagnosticsConfig Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907274 3938 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907282 3938 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907289 3938 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907297 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907305 3938 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907313 3938 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907321 3938 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:05:41.910469 master-0 kubenswrapper[3938]: W0318 13:05:41.907328 3938 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907339 3938 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907349 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907359 3938 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907369 3938 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907379 3938 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907388 3938 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907395 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907403 3938 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907411 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907419 3938 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907427 3938 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907435 3938 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907442 3938 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907450 3938 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907458 3938 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907466 3938 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907473 3938 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907481 
3938 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:05:41.911433 master-0 kubenswrapper[3938]: W0318 13:05:41.907488 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907497 3938 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907505 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907513 3938 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907522 3938 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907529 3938 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907537 3938 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907545 3938 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907554 3938 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907564 3938 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907574 3938 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907583 3938 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907590 3938 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907598 3938 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907606 3938 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907614 3938 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907622 3938 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907629 3938 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907637 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907645 3938 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:05:41.912687 master-0 kubenswrapper[3938]: W0318 13:05:41.907652 3938 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907659 3938 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907667 3938 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907675 3938 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907683 3938 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907691 3938 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907699 3938 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907707 3938 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907714 3938 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907722 3938 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907729 3938 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907739 3938 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
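The same set of "unrecognized feature gate" warnings appears three times in this capture because the kubelet evaluates its gate configuration more than once during startup; each pass re-logs every unknown gate. When triaging, it helps to collapse the repeats into a count per gate. A quick sketch (the function name and sample text are illustrative):

```python
import re
from collections import Counter

def count_unrecognized_gates(log_text: str) -> Counter:
    """Count occurrences of each 'unrecognized feature gate: X' warning."""
    return Counter(re.findall(r"unrecognized feature gate: (\S+)", log_text))

# Tiny sample in the same shape as the warnings above.
sample = """\
W0318 feature_gate.go:330] unrecognized feature gate: GatewayAPI
W0318 feature_gate.go:330] unrecognized feature gate: NewOLM
W0318 feature_gate.go:330] unrecognized feature gate: GatewayAPI
"""
print(count_unrecognized_gates(sample).most_common())
```

A gate appearing a multiple of three times here is just the repeated pass; a count that is not a multiple of three would point at a gate evaluated in only some of the passes.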
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907748 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907756 3938 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907764 3938 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907771 3938 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907779 3938 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907788 3938 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907795 3938 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:05:41.913811 master-0 kubenswrapper[3938]: W0318 13:05:41.907803 3938 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:05:41.914724 master-0 kubenswrapper[3938]: I0318 13:05:41.907816 3938 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:05:41.914724 master-0 kubenswrapper[3938]: I0318 13:05:41.908215 3938 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 13:05:41.917266 master-0 kubenswrapper[3938]: I0318 13:05:41.917213 3938 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 18 13:05:41.918446 master-0 kubenswrapper[3938]: I0318 13:05:41.918405 3938 server.go:997] "Starting client certificate rotation"
Mar 18 13:05:41.918514 master-0 kubenswrapper[3938]: I0318 13:05:41.918448 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 13:05:41.918738 master-0 kubenswrapper[3938]: I0318 13:05:41.918665 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 13:05:41.996431 master-0 kubenswrapper[3938]: I0318 13:05:41.996344 3938 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:05:42.005631 master-0 kubenswrapper[3938]: I0318 13:05:42.005449 3938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:05:42.006083 master-0 kubenswrapper[3938]: E0318 13:05:42.006009 3938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:05:42.044364 master-0 kubenswrapper[3938]: I0318 13:05:42.044246 3938 log.go:25] "Validated CRI v1 runtime API"
Mar 18 13:05:42.053988 master-0 kubenswrapper[3938]: I0318 13:05:42.053884 3938 log.go:25] "Validated CRI v1 image API"
Mar 18 13:05:42.057586 master-0 kubenswrapper[3938]: I0318 13:05:42.057531 3938 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 13:05:42.066376 master-0 kubenswrapper[3938]: I0318 13:05:42.066294 3938 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 81ff0aa5-030f-4028-8e1c-14208afe7bfb:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 18 13:05:42.066376 master-0 kubenswrapper[3938]: I0318 13:05:42.066355 3938 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 18 13:05:42.084665 master-0 kubenswrapper[3938]: I0318 13:05:42.084266 3938 manager.go:217] Machine: {Timestamp:2026-03-18 13:05:42.083300705 +0000 UTC m=+0.619047560 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ba707060b4b44f7a95adbd0306be6534 SystemUUID:ba707060-b4b4-4f7a-95ad-bd0306be6534 BootID:d4169b54-c5ea-4f66-b18c-82f9506641bd Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:25:c2:a7 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:12:bd:01:20:1c:b1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 13:05:42.084665 master-0 kubenswrapper[3938]: I0318 13:05:42.084609 3938 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 13:05:42.084962 master-0 kubenswrapper[3938]: I0318 13:05:42.084742 3938 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 13:05:42.085208 master-0 kubenswrapper[3938]: I0318 13:05:42.085165 3938 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 13:05:42.085478 master-0 kubenswrapper[3938]: I0318 13:05:42.085393 3938 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 13:05:42.085766 master-0 kubenswrapper[3938]: I0318 13:05:42.085466 3938 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 13:05:42.086802 master-0 kubenswrapper[3938]: I0318 13:05:42.086761 3938 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 13:05:42.086802 master-0 kubenswrapper[3938]: I0318 13:05:42.086790 3938 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 13:05:42.087093 master-0 kubenswrapper[3938]: I0318 13:05:42.086896 3938 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 13:05:42.087093 master-0 kubenswrapper[3938]: I0318 13:05:42.086925 3938 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 13:05:42.087200 master-0 kubenswrapper[3938]: I0318 13:05:42.087168 3938 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:05:42.087307 master-0 kubenswrapper[3938]: I0318 13:05:42.087268 3938 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 13:05:42.093977 master-0 kubenswrapper[3938]: I0318 13:05:42.093874 3938 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 13:05:42.093977 master-0 kubenswrapper[3938]: I0318 13:05:42.093907 3938 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 13:05:42.093977 master-0 kubenswrapper[3938]: I0318 13:05:42.093950 3938 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 13:05:42.093977 master-0 kubenswrapper[3938]: I0318 13:05:42.093969 3938 kubelet.go:324] "Adding apiserver pod source"
Mar 18 13:05:42.094703 master-0 kubenswrapper[3938]: I0318 13:05:42.094065 3938 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 13:05:42.099862 master-0 kubenswrapper[3938]: I0318 13:05:42.099780 3938 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 13:05:42.108316 master-0 kubenswrapper[3938]: W0318 13:05:42.108219 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:05:42.108456 master-0 kubenswrapper[3938]: W0318 13:05:42.108310 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:05:42.108456 master-0 kubenswrapper[3938]: E0318 13:05:42.108406 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:05:42.108456 master-0 kubenswrapper[3938]: E0318 13:05:42.108441 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:05:42.110344 master-0 kubenswrapper[3938]: I0318 13:05:42.110305 3938 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 13:05:42.110571 master-0 kubenswrapper[3938]: I0318 13:05:42.110541 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 13:05:42.110571 master-0 kubenswrapper[3938]: I0318 13:05:42.110566 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110577 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110586 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110595 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110604 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110613 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110622 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110633 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110642 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 13:05:42.110723 master-0 kubenswrapper[3938]: I0318 13:05:42.110678 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 13:05:42.114065 master-0 kubenswrapper[3938]: I0318 13:05:42.114010 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 18 13:05:42.114181 master-0 kubenswrapper[3938]: I0318 13:05:42.114078 3938 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 13:05:42.114634 master-0 kubenswrapper[3938]: I0318 13:05:42.114600 3938 server.go:1280] "Started kubelet"
Mar 18 13:05:42.116234 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 13:05:42.121609 master-0 kubenswrapper[3938]: I0318 13:05:42.121315 3938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 13:05:42.121707 master-0 kubenswrapper[3938]: I0318 13:05:42.121549 3938 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 13:05:42.121922 master-0 kubenswrapper[3938]: I0318 13:05:42.121829 3938 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 13:05:42.122333 master-0 kubenswrapper[3938]: I0318 13:05:42.122250 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:05:42.122702 master-0 kubenswrapper[3938]: I0318 13:05:42.122656 3938 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 13:05:42.125062 master-0 kubenswrapper[3938]: I0318 13:05:42.125000 3938 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 13:05:42.125143 master-0 kubenswrapper[3938]: I0318 13:05:42.125086 3938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 13:05:42.125313 master-0 kubenswrapper[3938]: I0318 13:05:42.125270 3938 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 13:05:42.125363 master-0 kubenswrapper[3938]: I0318 13:05:42.125319 3938 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 13:05:42.125417 master-0 kubenswrapper[3938]: I0318 13:05:42.125324 3938 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 13:05:42.125464 master-0 kubenswrapper[3938]: E0318 13:05:42.125276 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:42.136124 master-0 kubenswrapper[3938]: I0318 13:05:42.136098 3938 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 13:05:42.136224 master-0 kubenswrapper[3938]: I0318 13:05:42.136211 3938 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 13:05:42.138496 master-0 kubenswrapper[3938]: I0318 13:05:42.138419 3938 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 13:05:42.138677 master-0 kubenswrapper[3938]: I0318 13:05:42.138432 3938 factory.go:55] Registering systemd factory
Mar 18 13:05:42.138742 master-0 kubenswrapper[3938]: I0318 13:05:42.138704 3938 factory.go:221] Registration of the systemd container factory successfully
Mar 18 13:05:42.141064 master-0 kubenswrapper[3938]: I0318 13:05:42.141013 3938 factory.go:153] Registering CRI-O factory
Mar 18 13:05:42.141064 master-0 kubenswrapper[3938]: I0318 13:05:42.141064 3938 factory.go:221] Registration of the crio container factory successfully
Mar 18 13:05:42.141177 master-0 kubenswrapper[3938]: I0318 13:05:42.141150 3938 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 13:05:42.141228 master-0 kubenswrapper[3938]: I0318 13:05:42.141192 3938 factory.go:103] Registering Raw factory
Mar 18 13:05:42.141228 master-0 kubenswrapper[3938]: I0318 13:05:42.141215 3938 manager.go:1196] Started watching for new ooms in manager
Mar 18 13:05:42.141393 master-0 kubenswrapper[3938]: E0318 13:05:42.141329 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 18 13:05:42.141393 master-0 kubenswrapper[3938]: W0318 13:05:42.141332 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:05:42.141476 master-0 kubenswrapper[3938]: E0318 13:05:42.141427 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:05:42.141985 master-0 kubenswrapper[3938]: I0318 13:05:42.141918 3938 manager.go:319] Starting recovery of all containers
Mar 18 13:05:42.146077 master-0 kubenswrapper[3938]: E0318 13:05:42.142863 3938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189df14d326c1b65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.114564965 +0000 UTC m=+0.650311780,LastTimestamp:2026-03-18 13:05:42.114564965 +0000 UTC m=+0.650311780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:42.150113 master-0 kubenswrapper[3938]: E0318 13:05:42.150070 3938 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 18 13:05:42.164469 master-0 kubenswrapper[3938]: I0318 13:05:42.163973 3938 manager.go:324] Recovery completed
Mar 18 13:05:42.176994 master-0 kubenswrapper[3938]: I0318 13:05:42.176921 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.179954 master-0 kubenswrapper[3938]: I0318 13:05:42.179887 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.180048 master-0 kubenswrapper[3938]: I0318 13:05:42.179968 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.180048 master-0 kubenswrapper[3938]: I0318 13:05:42.179985 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.180841 master-0 kubenswrapper[3938]: I0318 13:05:42.180817 3938 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 13:05:42.180928 master-0 kubenswrapper[3938]: I0318 13:05:42.180913 3938 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 13:05:42.181025 master-0 kubenswrapper[3938]: I0318 13:05:42.181012 3938 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:05:42.226008 master-0 kubenswrapper[3938]: E0318 13:05:42.225877 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:42.308818 master-0 kubenswrapper[3938]: I0318 13:05:42.308677 3938 policy_none.go:49] "None policy: Start"
Mar 18 13:05:42.310234 master-0 kubenswrapper[3938]: I0318 13:05:42.310203 3938 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 13:05:42.310315 master-0 kubenswrapper[3938]: I0318 13:05:42.310276 3938 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 13:05:42.326455 master-0 kubenswrapper[3938]: E0318 13:05:42.326379 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.337587 3938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.341004 3938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.341077 3938 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.341107 3938 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: E0318 13:05:42.341179 3938 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: E0318 13:05:42.344388 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: W0318 13:05:42.344421 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: E0318 13:05:42.344524 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: E0318 13:05:42.427175 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: E0318 13:05:42.441560 3938 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.463479 3938 manager.go:334] "Starting Device Plugin manager"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.463544 3938 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.463563 3938 server.go:79] "Starting device plugin registration server"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.464173 3938 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.464208 3938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.464342 3938 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.464530 3938 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: I0318 13:05:42.464588 3938 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 13:05:42.480633 master-0 kubenswrapper[3938]: E0318 13:05:42.467069 3938 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:05:42.565172 master-0 kubenswrapper[3938]: I0318 13:05:42.565039 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.566666 master-0 kubenswrapper[3938]: I0318 13:05:42.566585 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.566826 master-0 kubenswrapper[3938]: I0318 13:05:42.566680 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.566826 master-0 kubenswrapper[3938]: I0318 13:05:42.566698 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.566826 master-0 kubenswrapper[3938]: I0318 13:05:42.566773 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:05:42.569135 master-0 kubenswrapper[3938]: E0318 13:05:42.569069 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 13:05:42.642344 master-0 kubenswrapper[3938]: I0318 13:05:42.642218 3938 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 13:05:42.642344 master-0 kubenswrapper[3938]: I0318 13:05:42.642341 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.643690 master-0 kubenswrapper[3938]: I0318 13:05:42.643645 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.643690 master-0 kubenswrapper[3938]: I0318 13:05:42.643692 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.643869 master-0 kubenswrapper[3938]: I0318 13:05:42.643710 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.643869 master-0 kubenswrapper[3938]: I0318 13:05:42.643857 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.645058 master-0 kubenswrapper[3938]: I0318 13:05:42.644932 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.645058 master-0 kubenswrapper[3938]: I0318 13:05:42.645018 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.645058 master-0 kubenswrapper[3938]: I0318 13:05:42.645034 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.645482 master-0 kubenswrapper[3938]: I0318 13:05:42.645455 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.645482 master-0 kubenswrapper[3938]: I0318 13:05:42.645465 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 13:05:42.645656 master-0 kubenswrapper[3938]: I0318 13:05:42.645573 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.645656 master-0 kubenswrapper[3938]: I0318 13:05:42.645612 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:05:42.645785 master-0 kubenswrapper[3938]: I0318 13:05:42.645667 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.646465 master-0 kubenswrapper[3938]: I0318 13:05:42.646386 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.646465 master-0 kubenswrapper[3938]: I0318 13:05:42.646443 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.646465 master-0 kubenswrapper[3938]: I0318 13:05:42.646459 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.647251 master-0 kubenswrapper[3938]: I0318 13:05:42.646570 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.647251 master-0 kubenswrapper[3938]: I0318 13:05:42.646702 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 13:05:42.647251 master-0 kubenswrapper[3938]: I0318 13:05:42.646739 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:42.647251 master-0 kubenswrapper[3938]: I0318 13:05:42.646741 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.647251 master-0 kubenswrapper[3938]: I0318 13:05:42.646798 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.647251 master-0 kubenswrapper[3938]: I0318 13:05:42.646823 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.647328 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.647441 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.647521 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.647823 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.647863 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.647886 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:42.648451
master-0 kubenswrapper[3938]: I0318 13:05:42.647864 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.648011 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.648036 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:42.648451 master-0 kubenswrapper[3938]: I0318 13:05:42.648191 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:42.649374 master-0 kubenswrapper[3938]: I0318 13:05:42.648466 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.649374 master-0 kubenswrapper[3938]: I0318 13:05:42.648524 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:42.649552 master-0 kubenswrapper[3938]: I0318 13:05:42.649383 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:42.649552 master-0 kubenswrapper[3938]: I0318 13:05:42.649426 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:42.649552 master-0 kubenswrapper[3938]: I0318 13:05:42.649442 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:42.649787 master-0 kubenswrapper[3938]: I0318 13:05:42.649592 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.649787 master-0 kubenswrapper[3938]: I0318 13:05:42.649624 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:42.651800 master-0 kubenswrapper[3938]: I0318 13:05:42.651681 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:42.651800 master-0 kubenswrapper[3938]: I0318 13:05:42.651762 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:42.651800 master-0 kubenswrapper[3938]: I0318 13:05:42.651781 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:42.652382 master-0 kubenswrapper[3938]: I0318 13:05:42.652313 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:42.652382 master-0 kubenswrapper[3938]: I0318 13:05:42.652368 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:42.652382 master-0 kubenswrapper[3938]: I0318 13:05:42.652390 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:42.739268 master-0 kubenswrapper[3938]: I0318 13:05:42.739127 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.739268 master-0 kubenswrapper[3938]: I0318 13:05:42.739187 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" 
(UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.739268 master-0 kubenswrapper[3938]: I0318 13:05:42.739222 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.739268 master-0 kubenswrapper[3938]: I0318 13:05:42.739259 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739315 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739357 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739401 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739443 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739474 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739504 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739532 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod 
\"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739566 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739605 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.739644 master-0 kubenswrapper[3938]: I0318 13:05:42.739641 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.740257 master-0 kubenswrapper[3938]: I0318 13:05:42.739670 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.740257 master-0 kubenswrapper[3938]: I0318 13:05:42.739697 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:42.740257 master-0 kubenswrapper[3938]: I0318 13:05:42.739736 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.746510 master-0 kubenswrapper[3938]: E0318 13:05:42.746437 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 13:05:42.769519 master-0 kubenswrapper[3938]: I0318 13:05:42.769454 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:42.770804 master-0 kubenswrapper[3938]: I0318 13:05:42.770744 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:42.770804 master-0 kubenswrapper[3938]: I0318 13:05:42.770801 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:42.771026 master-0 kubenswrapper[3938]: I0318 13:05:42.770818 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:42.771026 master-0 kubenswrapper[3938]: I0318 13:05:42.770882 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:42.772252 master-0 kubenswrapper[3938]: E0318 
13:05:42.772179 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:42.841118 master-0 kubenswrapper[3938]: I0318 13:05:42.841032 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.841118 master-0 kubenswrapper[3938]: I0318 13:05:42.841113 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841152 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841237 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841269 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841306 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841342 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841361 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841377 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841410 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841415 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841441 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:42.841438 master-0 kubenswrapper[3938]: I0318 13:05:42.841460 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.842187 master-0 kubenswrapper[3938]: I0318 13:05:42.841476 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.842187 master-0 kubenswrapper[3938]: I0318 13:05:42.841508 3938 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.842587 master-0 kubenswrapper[3938]: I0318 13:05:42.842391 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.842587 master-0 kubenswrapper[3938]: I0318 13:05:42.841498 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.842587 master-0 kubenswrapper[3938]: I0318 13:05:42.842469 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.842587 master-0 kubenswrapper[3938]: I0318 13:05:42.842506 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:42.842587 master-0 kubenswrapper[3938]: I0318 13:05:42.842538 3938 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.842587 master-0 kubenswrapper[3938]: I0318 13:05:42.842438 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842657 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842717 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842773 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 
13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842806 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842863 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842892 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.842992 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.843056 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.843091 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:42.843119 master-0 kubenswrapper[3938]: I0318 13:05:42.843118 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:42.843652 master-0 kubenswrapper[3938]: I0318 13:05:42.843167 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.843652 master-0 kubenswrapper[3938]: I0318 13:05:42.843211 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:42.843652 master-0 kubenswrapper[3938]: I0318 13:05:42.843265 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:42.981277 
master-0 kubenswrapper[3938]: I0318 13:05:42.981157 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:43.008313 master-0 kubenswrapper[3938]: I0318 13:05:43.008174 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:43.035535 master-0 kubenswrapper[3938]: I0318 13:05:43.035098 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:43.040130 master-0 kubenswrapper[3938]: W0318 13:05:43.040038 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:43.040262 master-0 kubenswrapper[3938]: E0318 13:05:43.040149 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:43.044394 master-0 kubenswrapper[3938]: W0318 13:05:43.044303 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:43.044394 master-0 kubenswrapper[3938]: E0318 13:05:43.044380 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:43.071235 master-0 kubenswrapper[3938]: I0318 13:05:43.071110 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:43.082516 master-0 kubenswrapper[3938]: I0318 13:05:43.082399 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:43.124394 master-0 kubenswrapper[3938]: I0318 13:05:43.124314 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:43.173255 master-0 kubenswrapper[3938]: I0318 13:05:43.173124 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:43.174749 master-0 kubenswrapper[3938]: I0318 13:05:43.174656 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:43.174749 master-0 kubenswrapper[3938]: I0318 13:05:43.174721 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:43.174749 master-0 kubenswrapper[3938]: I0318 13:05:43.174745 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:43.175232 master-0 kubenswrapper[3938]: I0318 13:05:43.174802 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:43.175962 master-0 kubenswrapper[3938]: E0318 13:05:43.175859 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:43.548173 master-0 kubenswrapper[3938]: E0318 13:05:43.548086 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 13:05:43.637443 master-0 kubenswrapper[3938]: W0318 13:05:43.637335 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:43.637645 master-0 kubenswrapper[3938]: E0318 13:05:43.637443 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:43.681495 master-0 kubenswrapper[3938]: W0318 13:05:43.681372 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:43.681495 master-0 kubenswrapper[3938]: E0318 13:05:43.681478 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Mar 18 13:05:43.976400 master-0 kubenswrapper[3938]: I0318 13:05:43.976271 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:43.977970 master-0 kubenswrapper[3938]: I0318 13:05:43.977888 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:43.978146 master-0 kubenswrapper[3938]: I0318 13:05:43.977989 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:43.978146 master-0 kubenswrapper[3938]: I0318 13:05:43.978005 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:43.978146 master-0 kubenswrapper[3938]: I0318 13:05:43.978081 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:43.979374 master-0 kubenswrapper[3938]: E0318 13:05:43.979318 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:44.151160 master-0 kubenswrapper[3938]: I0318 13:05:44.151051 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:44.169278 master-0 kubenswrapper[3938]: I0318 13:05:44.169219 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 13:05:44.170715 master-0 kubenswrapper[3938]: E0318 13:05:44.170655 3938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing 
request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:44.694291 master-0 kubenswrapper[3938]: W0318 13:05:44.694207 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:44.694291 master-0 kubenswrapper[3938]: E0318 13:05:44.694285 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:45.124320 master-0 kubenswrapper[3938]: I0318 13:05:45.124200 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:45.149351 master-0 kubenswrapper[3938]: E0318 13:05:45.149287 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 13:05:45.357911 master-0 kubenswrapper[3938]: W0318 13:05:45.357825 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289 WatchSource:0}: Error finding container 
f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289: Status 404 returned error can't find the container with id f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289 Mar 18 13:05:45.369172 master-0 kubenswrapper[3938]: I0318 13:05:45.369117 3938 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:05:45.376777 master-0 kubenswrapper[3938]: W0318 13:05:45.376716 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014 WatchSource:0}: Error finding container a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014: Status 404 returned error can't find the container with id a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014 Mar 18 13:05:45.396232 master-0 kubenswrapper[3938]: W0318 13:05:45.396139 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb WatchSource:0}: Error finding container 70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb: Status 404 returned error can't find the container with id 70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb Mar 18 13:05:45.434724 master-0 kubenswrapper[3938]: W0318 13:05:45.434660 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812 WatchSource:0}: Error finding container 9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812: Status 404 returned error can't find the container with id 
9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812 Mar 18 13:05:45.534107 master-0 kubenswrapper[3938]: W0318 13:05:45.534049 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405 WatchSource:0}: Error finding container 80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405: Status 404 returned error can't find the container with id 80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405 Mar 18 13:05:45.580334 master-0 kubenswrapper[3938]: I0318 13:05:45.580230 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:45.581902 master-0 kubenswrapper[3938]: I0318 13:05:45.581841 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:45.581902 master-0 kubenswrapper[3938]: I0318 13:05:45.581889 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:45.581902 master-0 kubenswrapper[3938]: I0318 13:05:45.581905 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:45.582234 master-0 kubenswrapper[3938]: I0318 13:05:45.582000 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:45.583072 master-0 kubenswrapper[3938]: E0318 13:05:45.583006 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:46.124023 master-0 kubenswrapper[3938]: I0318 13:05:46.123907 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:46.222739 master-0 kubenswrapper[3938]: W0318 13:05:46.222687 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:46.222986 master-0 kubenswrapper[3938]: E0318 13:05:46.222756 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:46.353371 master-0 kubenswrapper[3938]: I0318 13:05:46.353250 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405"} Mar 18 13:05:46.354950 master-0 kubenswrapper[3938]: I0318 13:05:46.354858 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812"} Mar 18 13:05:46.355748 master-0 kubenswrapper[3938]: I0318 13:05:46.355712 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb"} Mar 18 13:05:46.356569 master-0 kubenswrapper[3938]: 
I0318 13:05:46.356544 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014"} Mar 18 13:05:46.357348 master-0 kubenswrapper[3938]: I0318 13:05:46.357323 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289"} Mar 18 13:05:46.646296 master-0 kubenswrapper[3938]: W0318 13:05:46.646228 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:46.646296 master-0 kubenswrapper[3938]: E0318 13:05:46.646286 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:46.843916 master-0 kubenswrapper[3938]: W0318 13:05:46.843817 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:46.843916 master-0 kubenswrapper[3938]: E0318 13:05:46.843889 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:47.124326 master-0 kubenswrapper[3938]: I0318 13:05:47.124242 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:47.968462 master-0 kubenswrapper[3938]: E0318 13:05:47.968323 3938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189df14d326c1b65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.114564965 +0000 UTC m=+0.650311780,LastTimestamp:2026-03-18 13:05:42.114564965 +0000 UTC m=+0.650311780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:48.123405 master-0 kubenswrapper[3938]: I0318 13:05:48.123353 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:48.298626 master-0 kubenswrapper[3938]: I0318 13:05:48.298477 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 13:05:48.299836 master-0 kubenswrapper[3938]: 
E0318 13:05:48.299809 3938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:48.350646 master-0 kubenswrapper[3938]: E0318 13:05:48.350572 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 13:05:48.784275 master-0 kubenswrapper[3938]: I0318 13:05:48.784227 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:48.785732 master-0 kubenswrapper[3938]: I0318 13:05:48.785696 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:48.785796 master-0 kubenswrapper[3938]: I0318 13:05:48.785739 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:48.785796 master-0 kubenswrapper[3938]: I0318 13:05:48.785752 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:48.785867 master-0 kubenswrapper[3938]: I0318 13:05:48.785808 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:48.786810 master-0 kubenswrapper[3938]: E0318 13:05:48.786736 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:49.124473 master-0 
kubenswrapper[3938]: I0318 13:05:49.124122 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:49.202989 master-0 kubenswrapper[3938]: W0318 13:05:49.202918 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:49.202989 master-0 kubenswrapper[3938]: E0318 13:05:49.203007 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:49.364514 master-0 kubenswrapper[3938]: I0318 13:05:49.364405 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c"} Mar 18 13:05:49.365977 master-0 kubenswrapper[3938]: I0318 13:05:49.365900 3938 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="869e4216741d6f450122345795f65e862d784b38e4a915e11371713c52cf93a3" exitCode=0 Mar 18 13:05:49.366771 master-0 kubenswrapper[3938]: I0318 13:05:49.366033 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:49.366771 master-0 kubenswrapper[3938]: I0318 13:05:49.366017 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"869e4216741d6f450122345795f65e862d784b38e4a915e11371713c52cf93a3"} Mar 18 13:05:49.366899 master-0 kubenswrapper[3938]: I0318 13:05:49.366784 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:49.366899 master-0 kubenswrapper[3938]: I0318 13:05:49.366812 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:49.366899 master-0 kubenswrapper[3938]: I0318 13:05:49.366855 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:50.124022 master-0 kubenswrapper[3938]: I0318 13:05:50.123975 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:50.370534 master-0 kubenswrapper[3938]: I0318 13:05:50.370487 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e"} Mar 18 13:05:50.370534 master-0 kubenswrapper[3938]: I0318 13:05:50.370536 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:50.371583 master-0 kubenswrapper[3938]: I0318 13:05:50.371564 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:50.371633 master-0 kubenswrapper[3938]: I0318 13:05:50.371586 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 
13:05:50.371633 master-0 kubenswrapper[3938]: I0318 13:05:50.371595 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:50.374947 master-0 kubenswrapper[3938]: I0318 13:05:50.374591 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 13:05:50.375036 master-0 kubenswrapper[3938]: I0318 13:05:50.375009 3938 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="ff012d1908e6c18a073bb9839dbc954688b6f38c893b7256c001776ebecd526b" exitCode=1 Mar 18 13:05:50.375096 master-0 kubenswrapper[3938]: I0318 13:05:50.375040 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"ff012d1908e6c18a073bb9839dbc954688b6f38c893b7256c001776ebecd526b"} Mar 18 13:05:50.375096 master-0 kubenswrapper[3938]: I0318 13:05:50.375089 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:50.375828 master-0 kubenswrapper[3938]: I0318 13:05:50.375805 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:50.375886 master-0 kubenswrapper[3938]: I0318 13:05:50.375832 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:50.375886 master-0 kubenswrapper[3938]: I0318 13:05:50.375848 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:50.376209 master-0 kubenswrapper[3938]: I0318 13:05:50.376183 3938 scope.go:117] "RemoveContainer" containerID="ff012d1908e6c18a073bb9839dbc954688b6f38c893b7256c001776ebecd526b" Mar 18 
13:05:51.086243 master-0 kubenswrapper[3938]: W0318 13:05:51.086065 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:51.086243 master-0 kubenswrapper[3938]: E0318 13:05:51.086148 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:51.123775 master-0 kubenswrapper[3938]: I0318 13:05:51.123681 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:51.376579 master-0 kubenswrapper[3938]: I0318 13:05:51.376473 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:51.377231 master-0 kubenswrapper[3938]: I0318 13:05:51.377195 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:51.377231 master-0 kubenswrapper[3938]: I0318 13:05:51.377231 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:51.377337 master-0 kubenswrapper[3938]: I0318 13:05:51.377240 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:52.124217 master-0 kubenswrapper[3938]: I0318 13:05:52.124139 3938 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:52.324233 master-0 kubenswrapper[3938]: W0318 13:05:52.324137 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:52.324233 master-0 kubenswrapper[3938]: E0318 13:05:52.324215 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:52.467228 master-0 kubenswrapper[3938]: E0318 13:05:52.467161 3938 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 13:05:52.913336 master-0 kubenswrapper[3938]: W0318 13:05:52.913245 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:52.913336 master-0 kubenswrapper[3938]: E0318 13:05:52.913326 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:53.123524 master-0 kubenswrapper[3938]: I0318 
13:05:53.123376 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:53.382372 master-0 kubenswrapper[3938]: I0318 13:05:53.382202 3938 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="dceb07db18c0d8faeb0249820c09e2ecee50c97d0f9fd01d9a209e9a350fd96e" exitCode=0 Mar 18 13:05:53.382372 master-0 kubenswrapper[3938]: I0318 13:05:53.382300 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"dceb07db18c0d8faeb0249820c09e2ecee50c97d0f9fd01d9a209e9a350fd96e"} Mar 18 13:05:53.382699 master-0 kubenswrapper[3938]: I0318 13:05:53.382424 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:53.384001 master-0 kubenswrapper[3938]: I0318 13:05:53.383920 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:53.384001 master-0 kubenswrapper[3938]: I0318 13:05:53.383996 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:53.384187 master-0 kubenswrapper[3938]: I0318 13:05:53.384013 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:53.385986 master-0 kubenswrapper[3938]: I0318 13:05:53.385903 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb"} Mar 18 13:05:53.387472 master-0 
kubenswrapper[3938]: I0318 13:05:53.387397 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 13:05:53.387788 master-0 kubenswrapper[3938]: I0318 13:05:53.387647 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:53.388420 master-0 kubenswrapper[3938]: I0318 13:05:53.388171 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.388680 3938 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="bf1e8fd07a2ff43cfd320277fec3c6a5df3eeac3fccd083d6ab8482272a09365" exitCode=1
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.388766 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"bf1e8fd07a2ff43cfd320277fec3c6a5df3eeac3fccd083d6ab8482272a09365"}
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.388775 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.388823 3938 scope.go:117] "RemoveContainer" containerID="ff012d1908e6c18a073bb9839dbc954688b6f38c893b7256c001776ebecd526b"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.390008 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.390044 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.390058 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.390633 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.390667 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.390682 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: I0318 13:05:53.391281 3938 scope.go:117] "RemoveContainer" containerID="bf1e8fd07a2ff43cfd320277fec3c6a5df3eeac3fccd083d6ab8482272a09365"
Mar 18 13:05:53.399997 master-0 kubenswrapper[3938]: E0318 13:05:53.391521 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:05:53.404656 master-0 kubenswrapper[3938]: I0318 13:05:53.403199 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"40d2f52b6191fb64bc515d1f7e32cd3a0019730cc68c0ff9674d239a2fee21db"}
Mar 18 13:05:53.404656 master-0 kubenswrapper[3938]: I0318 13:05:53.403337 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:53.404656 master-0 kubenswrapper[3938]: I0318 13:05:53.404569 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:53.404656 master-0 kubenswrapper[3938]: I0318 13:05:53.404592 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:53.404656 master-0 kubenswrapper[3938]: I0318 13:05:53.404605 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:54.411962 master-0 kubenswrapper[3938]: I0318 13:05:54.411695 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 13:05:54.412594 master-0 kubenswrapper[3938]: I0318 13:05:54.412378 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.413538 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.413591 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.413605 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.414105 3938 scope.go:117] "RemoveContainer" containerID="bf1e8fd07a2ff43cfd320277fec3c6a5df3eeac3fccd083d6ab8482272a09365"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: E0318 13:05:54.414270 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.415149 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"7ddc54cddedd2bdae32224357d62187da26cebbd3a01e7a295c7e87fef85c020"}
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.416247 3938 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb" exitCode=1
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.416299 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.416565 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb"}
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.416903 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.416923 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:54.418003 master-0 kubenswrapper[3938]: I0318 13:05:54.416931 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:55.172971 master-0 kubenswrapper[3938]: E0318 13:05:55.172865 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 13:05:55.173198 master-0 kubenswrapper[3938]: I0318 13:05:55.173021 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:55.187098 master-0 kubenswrapper[3938]: I0318 13:05:55.187033 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:55.188056 master-0 kubenswrapper[3938]: I0318 13:05:55.188023 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:55.188143 master-0 kubenswrapper[3938]: I0318 13:05:55.188070 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:55.188143 master-0 kubenswrapper[3938]: I0318 13:05:55.188084 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:55.188227 master-0 kubenswrapper[3938]: I0318 13:05:55.188147 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:05:55.192572 master-0 kubenswrapper[3938]: E0318 13:05:55.192535 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 13:05:56.136486 master-0 kubenswrapper[3938]: I0318 13:05:56.136428 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:56.433704 master-0 kubenswrapper[3938]: I0318 13:05:56.433504 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a907a02503b5df781613b6da0961b359781cced0221882a7b1a1568fee1b84fe"}
Mar 18 13:05:56.433704 master-0 kubenswrapper[3938]: I0318 13:05:56.433606 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:56.436039 master-0 kubenswrapper[3938]: I0318 13:05:56.435535 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:56.436039 master-0 kubenswrapper[3938]: I0318 13:05:56.435562 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:56.436039 master-0 kubenswrapper[3938]: I0318 13:05:56.435580 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:56.437159 master-0 kubenswrapper[3938]: I0318 13:05:56.436864 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"}
Mar 18 13:05:56.437159 master-0 kubenswrapper[3938]: I0318 13:05:56.436917 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:56.437735 master-0 kubenswrapper[3938]: I0318 13:05:56.437391 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:56.437735 master-0 kubenswrapper[3938]: I0318 13:05:56.437406 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:56.437735 master-0 kubenswrapper[3938]: I0318 13:05:56.437413 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:56.437735 master-0 kubenswrapper[3938]: I0318 13:05:56.437549 3938 scope.go:117] "RemoveContainer" containerID="eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb"
Mar 18 13:05:57.009571 master-0 kubenswrapper[3938]: I0318 13:05:57.009199 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 13:05:57.030183 master-0 kubenswrapper[3938]: I0318 13:05:57.030101 3938 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 13:05:57.128456 master-0 kubenswrapper[3938]: I0318 13:05:57.128388 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:57.442528 master-0 kubenswrapper[3938]: I0318 13:05:57.442500 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:57.443232 master-0 kubenswrapper[3938]: I0318 13:05:57.442507 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"25e53d87fc10cbd1352f788562bb532f3ed8f0ccfa5cd8ec598184e45bd58b6c"}
Mar 18 13:05:57.443232 master-0 kubenswrapper[3938]: I0318 13:05:57.442525 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:57.443741 master-0 kubenswrapper[3938]: I0318 13:05:57.443713 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:57.443801 master-0 kubenswrapper[3938]: I0318 13:05:57.443743 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:57.443801 master-0 kubenswrapper[3938]: I0318 13:05:57.443741 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:57.443801 master-0 kubenswrapper[3938]: I0318 13:05:57.443778 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:57.443801 master-0 kubenswrapper[3938]: I0318 13:05:57.443792 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:57.443948 master-0 kubenswrapper[3938]: I0318 13:05:57.443754 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:57.974279 master-0 kubenswrapper[3938]: E0318 13:05:57.974139 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d326c1b65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.114564965 +0000 UTC m=+0.650311780,LastTimestamp:2026-03-18 13:05:42.114564965 +0000 UTC m=+0.650311780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:57.980149 master-0 kubenswrapper[3938]: E0318 13:05:57.980019 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:57.984038 master-0 kubenswrapper[3938]: E0318 13:05:57.983861 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:57.987849 master-0 kubenswrapper[3938]: E0318 13:05:57.987712 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:57.991457 master-0 kubenswrapper[3938]: E0318 13:05:57.991375 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d478a8440 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.468879424 +0000 UTC m=+1.004626219,LastTimestamp:2026-03-18 13:05:42.468879424 +0000 UTC m=+1.004626219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:57.996579 master-0 kubenswrapper[3938]: E0318 13:05:57.996511 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.566659486 +0000 UTC m=+1.102406321,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.000255 master-0 kubenswrapper[3938]: E0318 13:05:58.000132 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.566691647 +0000 UTC m=+1.102438482,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.004649 master-0 kubenswrapper[3938]: E0318 13:05:58.004559 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d3652721c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.566707167 +0000 UTC m=+1.102454012,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.008172 master-0 kubenswrapper[3938]: E0318 13:05:58.008052 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.643670207 +0000 UTC m=+1.179417052,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.012073 master-0 kubenswrapper[3938]: E0318 13:05:58.011885 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.643703908 +0000 UTC m=+1.179450743,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.017305 master-0 kubenswrapper[3938]: E0318 13:05:58.017137 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d3652721c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.643719759 +0000 UTC m=+1.179466594,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.021399 master-0 kubenswrapper[3938]: E0318 13:05:58.021310 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.644996609 +0000 UTC m=+1.180743454,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.025423 master-0 kubenswrapper[3938]: E0318 13:05:58.025345 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.645028289 +0000 UTC m=+1.180775134,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.028608 master-0 kubenswrapper[3938]: E0318 13:05:58.028542 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d3652721c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.64504382 +0000 UTC m=+1.180790665,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.035700 master-0 kubenswrapper[3938]: E0318 13:05:58.035551 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.646421732 +0000 UTC m=+1.182168577,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.040204 master-0 kubenswrapper[3938]: E0318 13:05:58.039959 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.646453603 +0000 UTC m=+1.182200448,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.043734 master-0 kubenswrapper[3938]: E0318 13:05:58.043647 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d3652721c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.646468484 +0000 UTC m=+1.182215319,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.047715 master-0 kubenswrapper[3938]: E0318 13:05:58.047650 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.646769951 +0000 UTC m=+1.182516786,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.051212 master-0 kubenswrapper[3938]: E0318 13:05:58.051132 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.646815332 +0000 UTC m=+1.182562167,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.054797 master-0 kubenswrapper[3938]: E0318 13:05:58.054734 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d3652721c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.646835562 +0000 UTC m=+1.182582397,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.057997 master-0 kubenswrapper[3938]: E0318 13:05:58.057909 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.647410906 +0000 UTC m=+1.183157741,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.062142 master-0 kubenswrapper[3938]: E0318 13:05:58.062054 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.647509938 +0000 UTC m=+1.183256773,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.066436 master-0 kubenswrapper[3938]: E0318 13:05:58.065892 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d3652721c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d3652721c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179992092 +0000 UTC m=+0.715738907,LastTimestamp:2026-03-18 13:05:42.647533649 +0000 UTC m=+1.183280484,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.070133 master-0 kubenswrapper[3938]: E0318 13:05:58.070031 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d36518184\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d36518184 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.1799305 +0000 UTC m=+0.715677315,LastTimestamp:2026-03-18 13:05:42.647851596 +0000 UTC m=+1.183598431,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.075085 master-0 kubenswrapper[3938]: E0318 13:05:58.074907 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14d365243db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14d365243db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:42.179980251 +0000 UTC m=+0.715727076,LastTimestamp:2026-03-18 13:05:42.647878087 +0000 UTC m=+1.183624932,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.081440 master-0 kubenswrapper[3938]: E0318 13:05:58.081331 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14df4672015 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:45.369018389 +0000 UTC m=+3.904765244,LastTimestamp:2026-03-18 13:05:45.369018389 +0000 UTC m=+3.904765244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.086776 master-0 kubenswrapper[3938]: E0318 13:05:58.086635 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14df57a46d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:45.387050704 +0000 UTC m=+3.922797510,LastTimestamp:2026-03-18 13:05:45.387050704 +0000 UTC m=+3.922797510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:58.090509 master-0 kubenswrapper[3938]: E0318 13:05:58.090433 3938 event.go:359]
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14df622f139 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:45.398104377 +0000 UTC m=+3.933851192,LastTimestamp:2026-03-18 13:05:45.398104377 +0000 UTC m=+3.933851192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.093985 master-0 kubenswrapper[3938]: E0318 13:05:58.093677 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df14df877fadf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:45.437231839 +0000 UTC m=+3.972978644,LastTimestamp:2026-03-18 13:05:45.437231839 +0000 UTC 
m=+3.972978644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.098114 master-0 kubenswrapper[3938]: E0318 13:05:58.098005 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14dfe5eccf5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:45.536244981 +0000 UTC m=+4.071991786,LastTimestamp:2026-03-18 13:05:45.536244981 +0000 UTC m=+4.071991786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.102215 master-0 kubenswrapper[3938]: E0318 13:05:58.102105 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ebb1f76ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 3.166s (3.166s including waiting). Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.70298795 +0000 UTC m=+7.238734755,LastTimestamp:2026-03-18 13:05:48.70298795 +0000 UTC m=+7.238734755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.105390 master-0 kubenswrapper[3938]: E0318 13:05:58.105299 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14ebbed28a3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 3.318s (3.318s including waiting). 
Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.716468387 +0000 UTC m=+7.252215192,LastTimestamp:2026-03-18 13:05:48.716468387 +0000 UTC m=+7.252215192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.109253 master-0 kubenswrapper[3938]: E0318 13:05:58.109175 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14ec5c9af8a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.881915786 +0000 UTC m=+7.417662591,LastTimestamp:2026-03-18 13:05:48.881915786 +0000 UTC m=+7.417662591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.113295 master-0 kubenswrapper[3938]: E0318 13:05:58.113211 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ec5eeb238 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.884341304 +0000 UTC m=+7.420088109,LastTimestamp:2026-03-18 13:05:48.884341304 +0000 UTC m=+7.420088109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.117141 master-0 kubenswrapper[3938]: E0318 13:05:58.117044 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14ec67ea80a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.893775882 +0000 UTC m=+7.429522677,LastTimestamp:2026-03-18 13:05:48.893775882 +0000 UTC m=+7.429522677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.121663 master-0 kubenswrapper[3938]: E0318 13:05:58.121511 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14ec6b29b38 openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.897180472 +0000 UTC m=+7.432927277,LastTimestamp:2026-03-18 13:05:48.897180472 +0000 UTC m=+7.432927277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.125488 master-0 kubenswrapper[3938]: I0318 13:05:58.125459 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:05:58.125704 master-0 kubenswrapper[3938]: E0318 13:05:58.125596 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ec7f635fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:48.918388222 +0000 UTC m=+7.454135017,LastTimestamp:2026-03-18 13:05:48.918388222 +0000 UTC 
m=+7.454135017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.128487 master-0 kubenswrapper[3938]: E0318 13:05:58.128383 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14ee2b0b71a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.366818586 +0000 UTC m=+7.902565401,LastTimestamp:2026-03-18 13:05:49.366818586 +0000 UTC m=+7.902565401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.129569 master-0 kubenswrapper[3938]: E0318 13:05:58.129502 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ee2d6e9f9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.369321977 +0000 UTC m=+7.905068822,LastTimestamp:2026-03-18 13:05:49.369321977 +0000 UTC m=+7.905068822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.133799 master-0 kubenswrapper[3938]: E0318 13:05:58.133642 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14ee648577a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.427087226 +0000 UTC m=+7.962834041,LastTimestamp:2026-03-18 13:05:49.427087226 +0000 UTC m=+7.962834041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.137564 master-0 kubenswrapper[3938]: E0318 13:05:58.137405 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ef0a7350d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.601076493 +0000 UTC m=+8.136823298,LastTimestamp:2026-03-18 13:05:49.601076493 +0000 UTC m=+8.136823298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.141340 master-0 kubenswrapper[3938]: E0318 13:05:58.141157 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ef1ee8f0b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.622529803 +0000 UTC m=+8.158276618,LastTimestamp:2026-03-18 13:05:49.622529803 +0000 UTC m=+8.158276618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.146879 master-0 kubenswrapper[3938]: E0318 13:05:58.146764 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14ee2d6e9f9\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ee2d6e9f9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.369321977 +0000 UTC m=+7.905068822,LastTimestamp:2026-03-18 13:05:52.463642845 +0000 UTC m=+10.999389680,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.151516 master-0 kubenswrapper[3938]: E0318 13:05:58.151373 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df14fa1596a7f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.128s (7.128s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.565545599 +0000 UTC m=+11.101292434,LastTimestamp:2026-03-18 13:05:52.565545599 +0000 UTC m=+11.101292434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.151909 master-0 kubenswrapper[3938]: I0318 13:05:58.151869 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:58.156559 master-0 kubenswrapper[3938]: E0318 13:05:58.156445 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14fa34292fd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.228s (7.228s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.597603069 +0000 UTC m=+11.133349864,LastTimestamp:2026-03-18 13:05:52.597603069 +0000 UTC m=+11.133349864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.161144 master-0 kubenswrapper[3938]: E0318 13:05:58.161036 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fa35d532e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.212s (7.212s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.599356206 +0000 UTC m=+11.135103021,LastTimestamp:2026-03-18 13:05:52.599356206 +0000 UTC m=+11.135103021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.165110 master-0 kubenswrapper[3938]: E0318 13:05:58.165028 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14ef0a7350d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ef0a7350d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.601076493 +0000 UTC m=+8.136823298,LastTimestamp:2026-03-18 13:05:52.688593939 +0000 UTC m=+11.224340754,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.168722 master-0 kubenswrapper[3938]: E0318 13:05:58.168620 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14ef1ee8f0b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ef1ee8f0b openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.622529803 +0000 UTC m=+8.158276618,LastTimestamp:2026-03-18 13:05:52.704151598 +0000 UTC m=+11.239898403,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.172014 master-0 kubenswrapper[3938]: E0318 13:05:58.171923 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df14fab8ba7da kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.736610266 +0000 UTC m=+11.272357071,LastTimestamp:2026-03-18 13:05:52.736610266 +0000 UTC m=+11.272357071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.175500 master-0 kubenswrapper[3938]: E0318 13:05:58.175392 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df14fabf6e5ec kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.743638508 +0000 UTC m=+11.279385313,LastTimestamp:2026-03-18 13:05:52.743638508 +0000 UTC m=+11.279385313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.178995 master-0 kubenswrapper[3938]: E0318 13:05:58.178875 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14fb0cd9a76 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.824818294 +0000 UTC m=+11.360565099,LastTimestamp:2026-03-18 13:05:52.824818294 +0000 UTC m=+11.360565099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.182288 master-0 kubenswrapper[3938]: E0318 13:05:58.182177 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14fb139b989 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.831904137 +0000 UTC m=+11.367650962,LastTimestamp:2026-03-18 13:05:52.831904137 +0000 UTC m=+11.367650962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.185839 master-0 kubenswrapper[3938]: E0318 13:05:58.185721 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14fb14de8ae kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.833226926 +0000 UTC m=+11.368973731,LastTimestamp:2026-03-18 13:05:52.833226926 +0000 UTC m=+11.368973731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.189532 master-0 kubenswrapper[3938]: E0318 13:05:58.189432 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fb1ebf20f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.843584015 +0000 UTC m=+11.379330810,LastTimestamp:2026-03-18 13:05:52.843584015 +0000 UTC m=+11.379330810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.193060 master-0 kubenswrapper[3938]: E0318 13:05:58.192961 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fb2ae1478 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.856306808 +0000 UTC 
m=+11.392053613,LastTimestamp:2026-03-18 13:05:52.856306808 +0000 UTC m=+11.392053613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.196955 master-0 kubenswrapper[3938]: E0318 13:05:58.196817 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fd2589ce8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.387576552 +0000 UTC m=+11.923323397,LastTimestamp:2026-03-18 13:05:53.387576552 +0000 UTC m=+11.923323397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.200875 master-0 kubenswrapper[3938]: E0318 13:05:58.200788 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14fd293c068 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.391452264 +0000 UTC m=+11.927199109,LastTimestamp:2026-03-18 13:05:53.391452264 +0000 UTC m=+11.927199109,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.204300 master-0 kubenswrapper[3938]: E0318 13:05:58.204221 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fdf0850f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.60041804 +0000 UTC m=+12.136164845,LastTimestamp:2026-03-18 13:05:53.60041804 +0000 UTC m=+12.136164845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.207359 master-0 kubenswrapper[3938]: E0318 13:05:58.207294 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fdf85f981 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.608653185 +0000 UTC m=+12.144399990,LastTimestamp:2026-03-18 13:05:53.608653185 +0000 UTC m=+12.144399990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.211040 master-0 kubenswrapper[3938]: E0318 13:05:58.210924 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14fdf9170fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.609404668 +0000 UTC m=+12.145151473,LastTimestamp:2026-03-18 13:05:53.609404668 +0000 UTC m=+12.145151473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.214670 master-0 kubenswrapper[3938]: E0318 13:05:58.214566 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14fd293c068\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14fd293c068 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.391452264 +0000 UTC m=+11.927199109,LastTimestamp:2026-03-18 13:05:54.41425169 +0000 UTC m=+12.949998485,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.219034 master-0 kubenswrapper[3938]: E0318 13:05:58.218909 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df15064eecae5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 3.013s (3.013s including waiting). Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:55.846892261 +0000 UTC m=+14.382639066,LastTimestamp:2026-03-18 13:05:55.846892261 +0000 UTC m=+14.382639066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.222738 master-0 kubenswrapper[3938]: E0318 13:05:58.222606 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df15065ea6157 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 2.253s (2.253s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:55.863380311 +0000 UTC m=+14.399127116,LastTimestamp:2026-03-18 13:05:55.863380311 +0000 UTC m=+14.399127116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.226519 master-0 kubenswrapper[3938]: E0318 13:05:58.226387 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df15070839c16 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:56.041194518 +0000 UTC m=+14.576941323,LastTimestamp:2026-03-18 13:05:56.041194518 +0000 UTC m=+14.576941323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.230349 master-0 kubenswrapper[3938]: E0318 13:05:58.230258 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df15070b8fcd5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:56.044692693 +0000 UTC m=+14.580439498,LastTimestamp:2026-03-18 13:05:56.044692693 +0000 UTC m=+14.580439498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.233796 master-0 kubenswrapper[3938]: E0318 13:05:58.233730 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df1507136684f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:56.052912207 +0000 UTC m=+14.588659012,LastTimestamp:2026-03-18 13:05:56.052912207 +0000 UTC m=+14.588659012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.237742 master-0 kubenswrapper[3938]: E0318 13:05:58.237644 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df15071568b46 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:56.05501831 +0000 UTC m=+14.590765115,LastTimestamp:2026-03-18 13:05:56.05501831 +0000 UTC m=+14.590765115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.241757 master-0 kubenswrapper[3938]: E0318 13:05:58.241638 3938 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df15088523df5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:56.440612341 +0000 UTC m=+14.976359146,LastTimestamp:2026-03-18 13:05:56.440612341 +0000 UTC m=+14.976359146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.245358 master-0 kubenswrapper[3938]: E0318 13:05:58.245283 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189df14fb0cd9a76\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14fb0cd9a76 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.824818294 +0000 UTC m=+11.360565099,LastTimestamp:2026-03-18 13:05:56.619770687 +0000 UTC m=+15.155517502,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.249082 master-0 kubenswrapper[3938]: E0318 13:05:58.249030 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189df14fb139b989\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14fb139b989 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:52.831904137 +0000 UTC m=+11.367650962,LastTimestamp:2026-03-18 13:05:56.628151788 +0000 UTC m=+15.163898593,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:58.444118 master-0 kubenswrapper[3938]: I0318 13:05:58.444063 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:58.444731 master-0 kubenswrapper[3938]: I0318 13:05:58.444641 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:58.444731 master-0 kubenswrapper[3938]: I0318 13:05:58.444661 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:58.444731 master-0 kubenswrapper[3938]: I0318 13:05:58.444670 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:59.128598 master-0 kubenswrapper[3938]: I0318 13:05:59.128547 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:05:59.372479 master-0 kubenswrapper[3938]: W0318 13:05:59.372415 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 18 13:05:59.372752 master-0 kubenswrapper[3938]: E0318 13:05:59.372485 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list 
resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 13:05:59.446194 master-0 kubenswrapper[3938]: I0318 13:05:59.446109 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:59.447282 master-0 kubenswrapper[3938]: I0318 13:05:59.447246 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:59.447282 master-0 kubenswrapper[3938]: I0318 13:05:59.447277 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:59.447282 master-0 kubenswrapper[3938]: I0318 13:05:59.447286 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:59.634522 master-0 kubenswrapper[3938]: I0318 13:05:59.634405 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:59.634808 master-0 kubenswrapper[3938]: I0318 13:05:59.634629 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:59.636129 master-0 kubenswrapper[3938]: I0318 13:05:59.636085 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:59.636212 master-0 kubenswrapper[3938]: I0318 13:05:59.636134 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:59.636212 master-0 kubenswrapper[3938]: I0318 13:05:59.636151 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:06:00.002094 master-0 kubenswrapper[3938]: I0318 13:06:00.002035 3938 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
Mar 18 13:06:00.007759 master-0 kubenswrapper[3938]: I0318 13:06:00.007709 3938 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:06:00.130152 master-0 kubenswrapper[3938]: I0318 13:06:00.129797 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:06:00.410200 master-0 kubenswrapper[3938]: W0318 13:06:00.410070 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 18 13:06:00.410200 master-0 kubenswrapper[3938]: E0318 13:06:00.410148 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 13:06:00.448021 master-0 kubenswrapper[3938]: I0318 13:06:00.447911 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:06:00.448739 master-0 kubenswrapper[3938]: I0318 13:06:00.448708 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:06:00.448739 master-0 kubenswrapper[3938]: I0318 13:06:00.448741 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:06:00.448831 master-0 kubenswrapper[3938]: I0318 13:06:00.448751 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 18 13:06:00.454129 master-0 kubenswrapper[3938]: I0318 13:06:00.454108 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:06:00.949055 master-0 kubenswrapper[3938]: W0318 13:06:00.948999 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 18 13:06:00.949055 master-0 kubenswrapper[3938]: E0318 13:06:00.949059 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 13:06:01.127715 master-0 kubenswrapper[3938]: I0318 13:06:01.127646 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:06:01.450145 master-0 kubenswrapper[3938]: I0318 13:06:01.450084 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:06:01.450914 master-0 kubenswrapper[3938]: I0318 13:06:01.450872 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:06:01.451016 master-0 kubenswrapper[3938]: I0318 13:06:01.450922 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:06:01.451016 master-0 kubenswrapper[3938]: I0318 13:06:01.450951 3938 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Mar 18 13:06:02.128860 master-0 kubenswrapper[3938]: I0318 13:06:02.128732 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:06:02.177376 master-0 kubenswrapper[3938]: E0318 13:06:02.177332 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 13:06:02.194006 master-0 kubenswrapper[3938]: I0318 13:06:02.193626 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:06:02.194667 master-0 kubenswrapper[3938]: I0318 13:06:02.194627 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:06:02.194730 master-0 kubenswrapper[3938]: I0318 13:06:02.194683 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:06:02.194764 master-0 kubenswrapper[3938]: I0318 13:06:02.194735 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:06:02.194825 master-0 kubenswrapper[3938]: I0318 13:06:02.194800 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:06:02.199482 master-0 kubenswrapper[3938]: E0318 13:06:02.199435 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 13:06:02.452639 master-0 kubenswrapper[3938]: I0318 13:06:02.452596 3938 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:06:02.453345 master-0 kubenswrapper[3938]: I0318 13:06:02.453304 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:06:02.453345 master-0 kubenswrapper[3938]: I0318 13:06:02.453339 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:06:02.453447 master-0 kubenswrapper[3938]: I0318 13:06:02.453350 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:06:02.467828 master-0 kubenswrapper[3938]: E0318 13:06:02.467767 3938 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 13:06:02.501520 master-0 kubenswrapper[3938]: I0318 13:06:02.501444 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:06:02.501730 master-0 kubenswrapper[3938]: I0318 13:06:02.501577 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:06:02.502543 master-0 kubenswrapper[3938]: I0318 13:06:02.502501 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:06:02.502543 master-0 kubenswrapper[3938]: I0318 13:06:02.502534 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:06:02.502680 master-0 kubenswrapper[3938]: I0318 13:06:02.502549 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:06:02.973091 master-0 kubenswrapper[3938]: I0318 13:06:02.972976 3938 csr.go:261] certificate signing request csr-txd8d is approved, waiting to be issued Mar 18 
13:06:03.130461 master-0 kubenswrapper[3938]: I0318 13:06:03.130372 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:03.298564 master-0 kubenswrapper[3938]: W0318 13:06:03.298424 3938 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 18 13:06:03.298564 master-0 kubenswrapper[3938]: E0318 13:06:03.298503 3938 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 13:06:04.109651 master-0 kubenswrapper[3938]: I0318 13:06:04.109527 3938 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:06:04.110863 master-0 kubenswrapper[3938]: I0318 13:06:04.109745 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:04.111138 master-0 kubenswrapper[3938]: I0318 13:06:04.111079 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:04.111138 master-0 kubenswrapper[3938]: I0318 13:06:04.111134 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:04.111356 master-0 kubenswrapper[3938]: I0318 13:06:04.111149 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:04.115817 master-0 kubenswrapper[3938]: I0318 13:06:04.115753 3938 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:06:04.127423 master-0 kubenswrapper[3938]: I0318 13:06:04.127334 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:04.458147 master-0 kubenswrapper[3938]: I0318 13:06:04.457870 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:04.459395 master-0 kubenswrapper[3938]: I0318 13:06:04.459337 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:04.459497 master-0 kubenswrapper[3938]: I0318 13:06:04.459401 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:04.459497 master-0 kubenswrapper[3938]: I0318 13:06:04.459425 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:05.128173 master-0 kubenswrapper[3938]: I0318 13:06:05.128102 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:05.814780 master-0 kubenswrapper[3938]: I0318 13:06:05.814648 3938 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:06:05.815134 master-0 kubenswrapper[3938]: I0318 13:06:05.814900 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:05.816216 master-0 kubenswrapper[3938]: I0318 13:06:05.816166 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:05.816216 master-0 kubenswrapper[3938]: I0318 13:06:05.816207 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:05.816216 master-0 kubenswrapper[3938]: I0318 13:06:05.816223 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:05.818369 master-0 kubenswrapper[3938]: I0318 13:06:05.818328 3938 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:06:06.128423 master-0 kubenswrapper[3938]: I0318 13:06:06.128257 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:06.462359 master-0 kubenswrapper[3938]: I0318 13:06:06.462273 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:06.463752 master-0 kubenswrapper[3938]: I0318 13:06:06.463588 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:06.463752 master-0 kubenswrapper[3938]: I0318 13:06:06.463666 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:06.463752 master-0 kubenswrapper[3938]: I0318 13:06:06.463689 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:06.466844 master-0 kubenswrapper[3938]: I0318 13:06:06.466795 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:06:06.469240 master-0 kubenswrapper[3938]: I0318 13:06:06.469191 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:06:07.127885 master-0 kubenswrapper[3938]: I0318 13:06:07.127810 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:07.464283 master-0 kubenswrapper[3938]: I0318 13:06:07.464220 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:07.465193 master-0 kubenswrapper[3938]: I0318 13:06:07.465134 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:07.465193 master-0 kubenswrapper[3938]: I0318 13:06:07.465193 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:07.465193 master-0 kubenswrapper[3938]: I0318 13:06:07.465211 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:08.128713 master-0 kubenswrapper[3938]: I0318 13:06:08.128635 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:08.467082 master-0 kubenswrapper[3938]: I0318 13:06:08.466925 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:08.468059 master-0 kubenswrapper[3938]: I0318 13:06:08.468009 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:08.468465 master-0 kubenswrapper[3938]: I0318 13:06:08.468075 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:08.468465 master-0 kubenswrapper[3938]: I0318 13:06:08.468090 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:09.128734 master-0 kubenswrapper[3938]: I0318 13:06:09.128669 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:09.182706 master-0 kubenswrapper[3938]: E0318 13:06:09.182617 3938 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 13:06:09.199813 master-0 kubenswrapper[3938]: I0318 13:06:09.199723 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:09.202168 master-0 kubenswrapper[3938]: I0318 13:06:09.201475 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:09.202168 master-0 kubenswrapper[3938]: I0318 13:06:09.201526 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:09.202168 master-0 kubenswrapper[3938]: I0318 13:06:09.201548 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:09.202168 master-0 kubenswrapper[3938]: I0318 13:06:09.201607 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:06:09.207442 master-0 kubenswrapper[3938]: E0318 13:06:09.207377 3938 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 13:06:09.342226 master-0 kubenswrapper[3938]: I0318 13:06:09.342125 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:09.343285 master-0 kubenswrapper[3938]: I0318 13:06:09.343258 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:09.343285 master-0 kubenswrapper[3938]: I0318 13:06:09.343294 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:09.343574 master-0 kubenswrapper[3938]: I0318 13:06:09.343306 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:09.343698 master-0 kubenswrapper[3938]: I0318 13:06:09.343677 3938 scope.go:117] "RemoveContainer" containerID="bf1e8fd07a2ff43cfd320277fec3c6a5df3eeac3fccd083d6ab8482272a09365"
Mar 18 13:06:09.352077 master-0 kubenswrapper[3938]: E0318 13:06:09.351839 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14ee2d6e9f9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ee2d6e9f9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.369321977 +0000 UTC m=+7.905068822,LastTimestamp:2026-03-18 13:06:09.34643847 +0000 UTC m=+27.882185275,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:06:09.552787 master-0 kubenswrapper[3938]: E0318 13:06:09.552673 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14ef0a7350d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ef0a7350d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.601076493 +0000 UTC m=+8.136823298,LastTimestamp:2026-03-18 13:06:09.548132853 +0000 UTC m=+28.083879658,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:06:09.604041 master-0 kubenswrapper[3938]: E0318 13:06:09.603763 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14ef1ee8f0b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14ef1ee8f0b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:49.622529803 +0000 UTC m=+8.158276618,LastTimestamp:2026-03-18 13:06:09.59369574 +0000 UTC m=+28.129442585,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:06:10.128186 master-0 kubenswrapper[3938]: I0318 13:06:10.128096 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:10.474544 master-0 kubenswrapper[3938]: I0318 13:06:10.474448 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 13:06:10.475466 master-0 kubenswrapper[3938]: I0318 13:06:10.475420 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 13:06:10.476197 master-0 kubenswrapper[3938]: I0318 13:06:10.476135 3938 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875" exitCode=1
Mar 18 13:06:10.476317 master-0 kubenswrapper[3938]: I0318 13:06:10.476197 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875"}
Mar 18 13:06:10.476317 master-0 kubenswrapper[3938]: I0318 13:06:10.476276 3938 scope.go:117] "RemoveContainer" containerID="bf1e8fd07a2ff43cfd320277fec3c6a5df3eeac3fccd083d6ab8482272a09365"
Mar 18 13:06:10.476544 master-0 kubenswrapper[3938]: I0318 13:06:10.476482 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:10.477738 master-0 kubenswrapper[3938]: I0318 13:06:10.477679 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:10.477738 master-0 kubenswrapper[3938]: I0318 13:06:10.477735 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:10.477738 master-0 kubenswrapper[3938]: I0318 13:06:10.477752 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:10.478230 master-0 kubenswrapper[3938]: I0318 13:06:10.478193 3938 scope.go:117] "RemoveContainer" containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875"
Mar 18 13:06:10.478532 master-0 kubenswrapper[3938]: E0318 13:06:10.478487 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:06:10.485985 master-0 kubenswrapper[3938]: E0318 13:06:10.485810 3938 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df14fd293c068\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14fd293c068 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:53.391452264 +0000 UTC m=+11.927199109,LastTimestamp:2026-03-18 13:06:10.478446904 +0000 UTC m=+29.014193739,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:06:11.136407 master-0 kubenswrapper[3938]: I0318 13:06:11.136332 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:11.482854 master-0 kubenswrapper[3938]: I0318 13:06:11.482794 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 13:06:12.127011 master-0 kubenswrapper[3938]: I0318 13:06:12.126909 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:12.468813 master-0 kubenswrapper[3938]: E0318 13:06:12.468600 3938 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:06:13.128132 master-0 kubenswrapper[3938]: I0318 13:06:13.128091 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:14.128017 master-0 kubenswrapper[3938]: I0318 13:06:14.127895 3938 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:06:15.034212 master-0 kubenswrapper[3938]: I0318 13:06:15.034153 3938 csr.go:257] certificate signing request csr-txd8d is issued
Mar 18 13:06:15.134313 master-0 kubenswrapper[3938]: I0318 13:06:15.134259 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.150245 master-0 kubenswrapper[3938]: I0318 13:06:15.150211 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.207765 master-0 kubenswrapper[3938]: I0318 13:06:15.207708 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.467805 master-0 kubenswrapper[3938]: I0318 13:06:15.467742 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.467805 master-0 kubenswrapper[3938]: E0318 13:06:15.467790 3938 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 13:06:15.492741 master-0 kubenswrapper[3938]: I0318 13:06:15.492701 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.510647 master-0 kubenswrapper[3938]: I0318 13:06:15.510615 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.572499 master-0 kubenswrapper[3938]: I0318 13:06:15.572466 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.848412 master-0 kubenswrapper[3938]: I0318 13:06:15.848279 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.848412 master-0 kubenswrapper[3938]: E0318 13:06:15.848330 3938 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 13:06:15.918346 master-0 kubenswrapper[3938]: I0318 13:06:15.918245 3938 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 18 13:06:15.959157 master-0 kubenswrapper[3938]: I0318 13:06:15.959116 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:15.961660 master-0 kubenswrapper[3938]: I0318 13:06:15.961621 3938 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 13:06:15.978114 master-0 kubenswrapper[3938]: I0318 13:06:15.978076 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:16.033787 master-0 kubenswrapper[3938]: I0318 13:06:16.033743 3938 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:06:16.037634 master-0 kubenswrapper[3938]: I0318 13:06:16.036955 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 09:29:10.776785845 +0000 UTC
Mar 18 13:06:16.038205 master-0 kubenswrapper[3938]: I0318 13:06:16.038166 3938 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h22m54.738654221s for next certificate rotation
Mar 18 13:06:16.192104 master-0 kubenswrapper[3938]: E0318 13:06:16.192066 3938 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 18 13:06:16.208387 master-0 kubenswrapper[3938]: I0318 13:06:16.208306 3938 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:06:16.209908 master-0 kubenswrapper[3938]: I0318 13:06:16.209887 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:06:16.210101 master-0 kubenswrapper[3938]: I0318 13:06:16.210089 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:06:16.210191 master-0 kubenswrapper[3938]: I0318 13:06:16.210175 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:06:16.210395 master-0 kubenswrapper[3938]: I0318 13:06:16.210382 3938 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:06:16.245459 master-0 kubenswrapper[3938]: I0318 13:06:16.245407 3938 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 13:06:16.245806 master-0 kubenswrapper[3938]: E0318 13:06:16.245792 3938 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 13:06:16.258687 master-0 kubenswrapper[3938]: E0318 13:06:16.258631 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.359643 master-0 kubenswrapper[3938]: E0318 13:06:16.359566 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.460670 master-0 kubenswrapper[3938]: E0318 13:06:16.460519 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.561616 master-0 kubenswrapper[3938]: E0318 13:06:16.561570 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.662593 master-0 kubenswrapper[3938]: E0318 13:06:16.662529 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.763642 master-0 kubenswrapper[3938]: E0318 13:06:16.763445 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.864377 master-0 kubenswrapper[3938]: E0318 13:06:16.864259 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:16.965392 master-0 kubenswrapper[3938]: E0318 13:06:16.965323 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.066401 master-0 kubenswrapper[3938]: E0318 13:06:17.066228 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.167228 master-0 kubenswrapper[3938]: E0318 13:06:17.167133 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.172126 master-0 kubenswrapper[3938]: I0318 13:06:17.172090 3938 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 18 13:06:17.198899 master-0 kubenswrapper[3938]: I0318 13:06:17.198828 3938 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 13:06:17.278191 master-0 kubenswrapper[3938]: E0318 13:06:17.278144 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.379464 master-0 kubenswrapper[3938]: E0318 13:06:17.379350 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.480542 master-0 kubenswrapper[3938]: E0318 13:06:17.480488 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.581613 master-0 kubenswrapper[3938]: E0318 13:06:17.581502 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.681918 master-0 kubenswrapper[3938]: E0318 13:06:17.681756 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.782629 master-0 kubenswrapper[3938]: E0318 13:06:17.782564 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.883075 master-0 kubenswrapper[3938]: E0318 13:06:17.883019 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:17.984450 master-0 kubenswrapper[3938]: E0318 13:06:17.984191 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.085303 master-0 kubenswrapper[3938]: E0318 13:06:18.085244 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.184567 master-0 kubenswrapper[3938]: I0318 13:06:18.182982 3938 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 13:06:18.186089 master-0 kubenswrapper[3938]: E0318 13:06:18.186030 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.286597 master-0 kubenswrapper[3938]: E0318 13:06:18.286416 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.386956 master-0 kubenswrapper[3938]: E0318 13:06:18.386848 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.487852 master-0 kubenswrapper[3938]: E0318 13:06:18.487774 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.588062 master-0 kubenswrapper[3938]: E0318 13:06:18.587873 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.688830 master-0 kubenswrapper[3938]: E0318 13:06:18.688784 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.789534 master-0 kubenswrapper[3938]: E0318 13:06:18.789465 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.890520 master-0 kubenswrapper[3938]: E0318 13:06:18.890349 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:18.991264 master-0 kubenswrapper[3938]: E0318 13:06:18.991186 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:19.091514 master-0 kubenswrapper[3938]: E0318 13:06:19.091420 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:19.191743 master-0 kubenswrapper[3938]: E0318 13:06:19.191671 3938 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:06:19.240480 master-0 kubenswrapper[3938]: I0318 13:06:19.240411 3938 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 13:06:20.119259 master-0 kubenswrapper[3938]: I0318 13:06:20.119204 3938 apiserver.go:52] "Watching apiserver"
Mar 18 13:06:20.128226 master-0 kubenswrapper[3938]: I0318 13:06:20.128182 3938 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 13:06:20.128681 master-0 kubenswrapper[3938]: I0318 13:06:20.128604 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm","openshift-network-operator/network-operator-7bd846bfc4-mk4d5"]
Mar 18 13:06:20.129230 master-0 kubenswrapper[3938]: I0318 13:06:20.129198 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.129598 master-0 kubenswrapper[3938]: I0318 13:06:20.129213 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.133702 master-0 kubenswrapper[3938]: I0318 13:06:20.133667 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 13:06:20.134297 master-0 kubenswrapper[3938]: I0318 13:06:20.134219 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 13:06:20.134590 master-0 kubenswrapper[3938]: I0318 13:06:20.134339 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 13:06:20.137136 master-0 kubenswrapper[3938]: I0318 13:06:20.135325 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 13:06:20.137136 master-0 kubenswrapper[3938]: I0318 13:06:20.135391 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 13:06:20.137136 master-0 kubenswrapper[3938]: I0318 13:06:20.135413 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 13:06:20.226146 master-0 kubenswrapper[3938]: I0318 13:06:20.226041 3938 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 13:06:20.250221 master-0 kubenswrapper[3938]: I0318 13:06:20.250169 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-m2vzq"]
Mar 18 13:06:20.250472 master-0 kubenswrapper[3938]: I0318 13:06:20.250448 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.254362 master-0 kubenswrapper[3938]: I0318 13:06:20.253976 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 18 13:06:20.254362 master-0 kubenswrapper[3938]: I0318 13:06:20.254042 3938 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 18 13:06:20.254362 master-0 kubenswrapper[3938]: I0318 13:06:20.254080 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 18 13:06:20.254964 master-0 kubenswrapper[3938]: I0318 13:06:20.254560 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 18 13:06:20.309761 master-0 kubenswrapper[3938]: I0318 13:06:20.309685 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.309761 master-0 kubenswrapper[3938]: I0318 13:06:20.309759 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.310211 master-0 kubenswrapper[3938]: I0318 13:06:20.309797 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.310211 master-0 kubenswrapper[3938]: I0318 13:06:20.309831 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.310211 master-0 kubenswrapper[3938]: I0318 13:06:20.309856 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.310211 master-0 kubenswrapper[3938]: I0318 13:06:20.309880 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.310211 master-0 kubenswrapper[3938]: I0318 13:06:20.309907 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.310211 master-0 kubenswrapper[3938]: I0318 13:06:20.309930 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.410987 master-0 kubenswrapper[3938]: I0318 13:06:20.410790 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.410987 master-0 kubenswrapper[3938]: I0318 13:06:20.410851 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID:
\"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:06:20.411298 master-0 kubenswrapper[3938]: I0318 13:06:20.410999 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:06:20.411298 master-0 kubenswrapper[3938]: I0318 13:06:20.411094 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:06:20.411298 master-0 kubenswrapper[3938]: I0318 13:06:20.411247 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-ca-bundle\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq" Mar 18 13:06:20.411298 master-0 kubenswrapper[3938]: I0318 13:06:20.411246 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:06:20.411298 master-0 kubenswrapper[3938]: I0318 13:06:20.411278 3938 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-resolv-conf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq" Mar 18 13:06:20.411470 master-0 kubenswrapper[3938]: I0318 13:06:20.411336 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-var-run-resolv-conf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq" Mar 18 13:06:20.411470 master-0 kubenswrapper[3938]: I0318 13:06:20.411384 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:06:20.411470 master-0 kubenswrapper[3938]: I0318 13:06:20.411413 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:06:20.411612 master-0 kubenswrapper[3938]: I0318 13:06:20.411567 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod 
\"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.411675 master-0 kubenswrapper[3938]: I0318 13:06:20.411579 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.411675 master-0 kubenswrapper[3938]: I0318 13:06:20.411608 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.411847 master-0 kubenswrapper[3938]: E0318 13:06:20.411792 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:20.411915 master-0 kubenswrapper[3938]: E0318 13:06:20.411892 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:20.911861087 +0000 UTC m=+39.447607892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:20.412023 master-0 kubenswrapper[3938]: I0318 13:06:20.411960 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-sno-bootstrap-files\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.412023 master-0 kubenswrapper[3938]: I0318 13:06:20.412008 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.412085 master-0 kubenswrapper[3938]: I0318 13:06:20.412038 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2sf\" (UniqueName: \"kubernetes.io/projected/c0403564-f8d9-4d81-b9e3-d9028fe58590-kube-api-access-4n2sf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.413108 master-0 kubenswrapper[3938]: I0318 13:06:20.413078 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.413297 master-0 kubenswrapper[3938]: I0318 13:06:20.413235 3938 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 18 13:06:20.422209 master-0 kubenswrapper[3938]: I0318 13:06:20.421660 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.440030 master-0 kubenswrapper[3938]: I0318 13:06:20.439797 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.440030 master-0 kubenswrapper[3938]: I0318 13:06:20.439788 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.451573 master-0 kubenswrapper[3938]: I0318 13:06:20.451501 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:06:20.464545 master-0 kubenswrapper[3938]: W0318 13:06:20.464479 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a0944d2_d99a_42eb_81f5_a212b750b8f4.slice/crio-7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9 WatchSource:0}: Error finding container 7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9: Status 404 returned error can't find the container with id 7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9
Mar 18 13:06:20.504601 master-0 kubenswrapper[3938]: I0318 13:06:20.504237 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" event={"ID":"8a0944d2-d99a-42eb-81f5-a212b750b8f4","Type":"ContainerStarted","Data":"7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9"}
Mar 18 13:06:20.512711 master-0 kubenswrapper[3938]: I0318 13:06:20.512627 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-sno-bootstrap-files\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.512711 master-0 kubenswrapper[3938]: I0318 13:06:20.512699 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2sf\" (UniqueName: \"kubernetes.io/projected/c0403564-f8d9-4d81-b9e3-d9028fe58590-kube-api-access-4n2sf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.512874 master-0 kubenswrapper[3938]: I0318 13:06:20.512724 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-ca-bundle\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.512874 master-0 kubenswrapper[3938]: I0318 13:06:20.512752 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-resolv-conf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.512980 master-0 kubenswrapper[3938]: I0318 13:06:20.512910 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-var-run-resolv-conf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.512980 master-0 kubenswrapper[3938]: I0318 13:06:20.512950 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-resolv-conf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.513067 master-0 kubenswrapper[3938]: I0318 13:06:20.512988 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-var-run-resolv-conf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") "
pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.513067 master-0 kubenswrapper[3938]: I0318 13:06:20.513042 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-sno-bootstrap-files\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.513141 master-0 kubenswrapper[3938]: I0318 13:06:20.513068 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-ca-bundle\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.537078 master-0 kubenswrapper[3938]: I0318 13:06:20.536983 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2sf\" (UniqueName: \"kubernetes.io/projected/c0403564-f8d9-4d81-b9e3-d9028fe58590-kube-api-access-4n2sf\") pod \"assisted-installer-controller-m2vzq\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") " pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.579257 master-0 kubenswrapper[3938]: I0318 13:06:20.579164 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:20.916400 master-0 kubenswrapper[3938]: I0318 13:06:20.916321 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:20.916745 master-0 kubenswrapper[3938]: E0318 13:06:20.916639 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:20.916796 master-0 kubenswrapper[3938]: E0318 13:06:20.916773 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:21.916744108 +0000 UTC m=+40.452490913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:21.507170 master-0 kubenswrapper[3938]: I0318 13:06:21.507052 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-m2vzq" event={"ID":"c0403564-f8d9-4d81-b9e3-d9028fe58590","Type":"ContainerStarted","Data":"ce97760530466dc4fab04d92ea3320ac86069f6a538466695591a4fec01d17ee"}
Mar 18 13:06:21.930478 master-0 kubenswrapper[3938]: I0318 13:06:21.930213 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:21.930478 master-0 kubenswrapper[3938]: E0318 13:06:21.930437 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:21.931015 master-0 kubenswrapper[3938]: E0318 13:06:21.930549 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:23.930523599 +0000 UTC m=+42.466270404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:23.786730 master-0 kubenswrapper[3938]: I0318 13:06:23.786630 3938 csr.go:261] certificate signing request csr-bbfgx is approved, waiting to be issued
Mar 18 13:06:23.803997 master-0 kubenswrapper[3938]: I0318 13:06:23.803897 3938 csr.go:257] certificate signing request csr-bbfgx is issued
Mar 18 13:06:23.945520 master-0 kubenswrapper[3938]: I0318 13:06:23.945375 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:23.945520 master-0 kubenswrapper[3938]: E0318 13:06:23.945528 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:23.945789 master-0 kubenswrapper[3938]: E0318 13:06:23.945591 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:27.945574829 +0000 UTC m=+46.481321634 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:24.110583 master-0 kubenswrapper[3938]: I0318 13:06:24.110390 3938 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 13:06:24.805598 master-0 kubenswrapper[3938]: I0318 13:06:24.805482 3938 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 06:39:16.934184703 +0000 UTC
Mar 18 13:06:24.805598 master-0 kubenswrapper[3938]: I0318 13:06:24.805526 3938 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h32m52.128663153s for next certificate rotation
Mar 18 13:06:25.464021 master-0 kubenswrapper[3938]: I0318 13:06:25.463646 3938 scope.go:117] "RemoveContainer" containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875"
Mar 18 13:06:25.464282 master-0 kubenswrapper[3938]: E0318 13:06:25.464158 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:06:25.464793 master-0 kubenswrapper[3938]: I0318 13:06:25.464748 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 18 13:06:25.518302 master-0 kubenswrapper[3938]: I0318 13:06:25.518229 3938 scope.go:117] "RemoveContainer"
containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875"
Mar 18 13:06:25.518578 master-0 kubenswrapper[3938]: E0318 13:06:25.518383 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:06:25.806759 master-0 kubenswrapper[3938]: I0318 13:06:25.806617 3938 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 06:59:19.040325177 +0000 UTC
Mar 18 13:06:25.806759 master-0 kubenswrapper[3938]: I0318 13:06:25.806664 3938 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h52m53.233664782s for next certificate rotation
Mar 18 13:06:27.987857 master-0 kubenswrapper[3938]: I0318 13:06:27.987760 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:06:27.988590 master-0 kubenswrapper[3938]: E0318 13:06:27.988170 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:27.988590 master-0 kubenswrapper[3938]: E0318 13:06:27.988362 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:35.988322388 +0000 UTC m=+54.524069193 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:06:28.527290 master-0 kubenswrapper[3938]: I0318 13:06:28.527220 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" event={"ID":"8a0944d2-d99a-42eb-81f5-a212b750b8f4","Type":"ContainerStarted","Data":"6b882cdda72d564225a61ad06267c4be93a7acf1cff49af344ca080e3af8cb10"}
Mar 18 13:06:30.158621 master-0 kubenswrapper[3938]: I0318 13:06:30.158537 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" podStartSLOduration=6.613752669 podStartE2EDuration="14.158487215s" podCreationTimestamp="2026-03-18 13:06:16 +0000 UTC" firstStartedPulling="2026-03-18 13:06:20.466757956 +0000 UTC m=+39.002504761" lastFinishedPulling="2026-03-18 13:06:28.011492502 +0000 UTC m=+46.547239307" observedRunningTime="2026-03-18 13:06:30.102232356 +0000 UTC m=+48.637979181" watchObservedRunningTime="2026-03-18 13:06:30.158487215 +0000 UTC m=+48.694234020"
Mar 18 13:06:30.537188 master-0 kubenswrapper[3938]: I0318 13:06:30.537059 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-m2vzq" event={"ID":"c0403564-f8d9-4d81-b9e3-d9028fe58590","Type":"ContainerStarted","Data":"6c9c61fe13233fc2963a22bc53cbe738d781d6a4794b40b0e2484f290dbd30f4"}
Mar 18 13:06:31.541392 master-0 kubenswrapper[3938]: I0318 13:06:31.541334 3938 generic.go:334] "Generic (PLEG): container finished" podID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerID="6c9c61fe13233fc2963a22bc53cbe738d781d6a4794b40b0e2484f290dbd30f4" exitCode=0
Mar 18 13:06:31.542653 master-0 kubenswrapper[3938]: I0318 13:06:31.541425 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-m2vzq" event={"ID":"c0403564-f8d9-4d81-b9e3-d9028fe58590","Type":"ContainerDied","Data":"6c9c61fe13233fc2963a22bc53cbe738d781d6a4794b40b0e2484f290dbd30f4"}
Mar 18 13:06:31.555347 master-0 kubenswrapper[3938]: I0318 13:06:31.555299 3938 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:06:31.614971 master-0 kubenswrapper[3938]: I0318 13:06:31.614876 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-var-run-resolv-conf\") pod \"c0403564-f8d9-4d81-b9e3-d9028fe58590\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") "
Mar 18 13:06:31.614971 master-0 kubenswrapper[3938]: I0318 13:06:31.614965 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-ca-bundle\") pod \"c0403564-f8d9-4d81-b9e3-d9028fe58590\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") "
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615008 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2sf\" (UniqueName: \"kubernetes.io/projected/c0403564-f8d9-4d81-b9e3-d9028fe58590-kube-api-access-4n2sf\") pod \"c0403564-f8d9-4d81-b9e3-d9028fe58590\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") "
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615033 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-resolv-conf\") pod \"c0403564-f8d9-4d81-b9e3-d9028fe58590\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") "
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615055 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-sno-bootstrap-files\") pod \"c0403564-f8d9-4d81-b9e3-d9028fe58590\" (UID: \"c0403564-f8d9-4d81-b9e3-d9028fe58590\") "
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615085 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "c0403564-f8d9-4d81-b9e3-d9028fe58590" (UID: "c0403564-f8d9-4d81-b9e3-d9028fe58590"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615135 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "c0403564-f8d9-4d81-b9e3-d9028fe58590" (UID: "c0403564-f8d9-4d81-b9e3-d9028fe58590"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615103 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "c0403564-f8d9-4d81-b9e3-d9028fe58590" (UID: "c0403564-f8d9-4d81-b9e3-d9028fe58590"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:06:31.615362 master-0 kubenswrapper[3938]: I0318 13:06:31.615164 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "c0403564-f8d9-4d81-b9e3-d9028fe58590" (UID: "c0403564-f8d9-4d81-b9e3-d9028fe58590"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:06:31.615654 master-0 kubenswrapper[3938]: I0318 13:06:31.615619 3938 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 18 13:06:31.615704 master-0 kubenswrapper[3938]: I0318 13:06:31.615659 3938 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Mar 18 13:06:31.615789 master-0 kubenswrapper[3938]: I0318 13:06:31.615763 3938 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 18 13:06:31.615789 master-0 kubenswrapper[3938]: I0318 13:06:31.615786 3938 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/c0403564-f8d9-4d81-b9e3-d9028fe58590-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:06:31.619268 master-0 kubenswrapper[3938]: I0318 13:06:31.619201 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0403564-f8d9-4d81-b9e3-d9028fe58590-kube-api-access-4n2sf" (OuterVolumeSpecName: "kube-api-access-4n2sf") pod "c0403564-f8d9-4d81-b9e3-d9028fe58590" (UID: "c0403564-f8d9-4d81-b9e3-d9028fe58590"). InnerVolumeSpecName "kube-api-access-4n2sf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:06:31.716814 master-0 kubenswrapper[3938]: I0318 13:06:31.716732 3938 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2sf\" (UniqueName: \"kubernetes.io/projected/c0403564-f8d9-4d81-b9e3-d9028fe58590-kube-api-access-4n2sf\") on node \"master-0\" DevicePath \"\""
Mar 18 13:06:32.546360 master-0 kubenswrapper[3938]: I0318 13:06:32.546312 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-m2vzq" event={"ID":"c0403564-f8d9-4d81-b9e3-d9028fe58590","Type":"ContainerDied","Data":"ce97760530466dc4fab04d92ea3320ac86069f6a538466695591a4fec01d17ee"}
Mar 18 13:06:32.546360 master-0 kubenswrapper[3938]: I0318 13:06:32.546355 3938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce97760530466dc4fab04d92ea3320ac86069f6a538466695591a4fec01d17ee"
Mar 18 13:06:32.547023 master-0 kubenswrapper[3938]: I0318 13:06:32.546996 3938 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="assisted-installer/assisted-installer-controller-m2vzq" Mar 18 13:06:32.994659 master-0 kubenswrapper[3938]: I0318 13:06:32.994606 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-9f9ht"] Mar 18 13:06:32.994876 master-0 kubenswrapper[3938]: E0318 13:06:32.994699 3938 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:06:32.994876 master-0 kubenswrapper[3938]: I0318 13:06:32.994711 3938 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:06:32.994876 master-0 kubenswrapper[3938]: I0318 13:06:32.994732 3938 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:06:32.995005 master-0 kubenswrapper[3938]: I0318 13:06:32.994885 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:33.172161 master-0 kubenswrapper[3938]: I0318 13:06:33.172096 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qws72\" (UniqueName: \"kubernetes.io/projected/10c6ab19-9232-47bd-95da-136641cc3f2d-kube-api-access-qws72\") pod \"mtu-prober-9f9ht\" (UID: \"10c6ab19-9232-47bd-95da-136641cc3f2d\") " pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:33.273103 master-0 kubenswrapper[3938]: I0318 13:06:33.272905 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qws72\" (UniqueName: \"kubernetes.io/projected/10c6ab19-9232-47bd-95da-136641cc3f2d-kube-api-access-qws72\") pod \"mtu-prober-9f9ht\" (UID: \"10c6ab19-9232-47bd-95da-136641cc3f2d\") " pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:33.355390 master-0 kubenswrapper[3938]: I0318 13:06:33.355339 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qws72\" (UniqueName: \"kubernetes.io/projected/10c6ab19-9232-47bd-95da-136641cc3f2d-kube-api-access-qws72\") pod \"mtu-prober-9f9ht\" (UID: \"10c6ab19-9232-47bd-95da-136641cc3f2d\") " pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:33.605314 master-0 kubenswrapper[3938]: I0318 13:06:33.605123 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:33.616529 master-0 kubenswrapper[3938]: W0318 13:06:33.616456 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c6ab19_9232_47bd_95da_136641cc3f2d.slice/crio-564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf WatchSource:0}: Error finding container 564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf: Status 404 returned error can't find the container with id 564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf Mar 18 13:06:34.553994 master-0 kubenswrapper[3938]: I0318 13:06:34.553645 3938 generic.go:334] "Generic (PLEG): container finished" podID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerID="e5d871ce15c246b83610b31f823caa6e0c2380ca2682febc8546add0e167eb72" exitCode=0 Mar 18 13:06:34.553994 master-0 kubenswrapper[3938]: I0318 13:06:34.553755 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-9f9ht" event={"ID":"10c6ab19-9232-47bd-95da-136641cc3f2d","Type":"ContainerDied","Data":"e5d871ce15c246b83610b31f823caa6e0c2380ca2682febc8546add0e167eb72"} Mar 18 13:06:34.553994 master-0 kubenswrapper[3938]: I0318 13:06:34.553984 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-9f9ht" event={"ID":"10c6ab19-9232-47bd-95da-136641cc3f2d","Type":"ContainerStarted","Data":"564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf"} Mar 18 13:06:35.576025 master-0 kubenswrapper[3938]: I0318 13:06:35.575975 3938 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:35.696983 master-0 kubenswrapper[3938]: I0318 13:06:35.696848 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qws72\" (UniqueName: \"kubernetes.io/projected/10c6ab19-9232-47bd-95da-136641cc3f2d-kube-api-access-qws72\") pod \"10c6ab19-9232-47bd-95da-136641cc3f2d\" (UID: \"10c6ab19-9232-47bd-95da-136641cc3f2d\") " Mar 18 13:06:35.700618 master-0 kubenswrapper[3938]: I0318 13:06:35.700531 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c6ab19-9232-47bd-95da-136641cc3f2d-kube-api-access-qws72" (OuterVolumeSpecName: "kube-api-access-qws72") pod "10c6ab19-9232-47bd-95da-136641cc3f2d" (UID: "10c6ab19-9232-47bd-95da-136641cc3f2d"). InnerVolumeSpecName "kube-api-access-qws72". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:06:35.797851 master-0 kubenswrapper[3938]: I0318 13:06:35.797759 3938 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qws72\" (UniqueName: \"kubernetes.io/projected/10c6ab19-9232-47bd-95da-136641cc3f2d-kube-api-access-qws72\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:36.000147 master-0 kubenswrapper[3938]: I0318 13:06:36.000066 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:06:36.001191 master-0 kubenswrapper[3938]: E0318 13:06:36.001130 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:36.001267 master-0 kubenswrapper[3938]: E0318 13:06:36.001241 3938 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:52.00121638 +0000 UTC m=+70.536963185 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:36.559214 master-0 kubenswrapper[3938]: I0318 13:06:36.559137 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-9f9ht" event={"ID":"10c6ab19-9232-47bd-95da-136641cc3f2d","Type":"ContainerDied","Data":"564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf"} Mar 18 13:06:36.559214 master-0 kubenswrapper[3938]: I0318 13:06:36.559190 3938 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf" Mar 18 13:06:36.559493 master-0 kubenswrapper[3938]: I0318 13:06:36.559230 3938 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-9f9ht" Mar 18 13:06:37.968129 master-0 kubenswrapper[3938]: I0318 13:06:37.968042 3938 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-9f9ht"] Mar 18 13:06:37.984000 master-0 kubenswrapper[3938]: I0318 13:06:37.983911 3938 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-9f9ht"] Mar 18 13:06:38.348296 master-0 kubenswrapper[3938]: I0318 13:06:38.348097 3938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" path="/var/lib/kubelet/pods/10c6ab19-9232-47bd-95da-136641cc3f2d/volumes" Mar 18 13:06:40.342584 master-0 kubenswrapper[3938]: I0318 13:06:40.342405 3938 scope.go:117] "RemoveContainer" containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875" Mar 18 13:06:40.570278 master-0 kubenswrapper[3938]: I0318 13:06:40.570250 3938 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 13:06:40.571125 master-0 kubenswrapper[3938]: I0318 13:06:40.571069 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"4bd355c34f8aa8d889ca1a40b947fb34311faee6233b1e449a1cc61917522f5b"} Mar 18 13:06:41.810659 master-0 kubenswrapper[3938]: I0318 13:06:41.810545 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=16.810518692 podStartE2EDuration="16.810518692s" podCreationTimestamp="2026-03-18 13:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:41.723789186 
+0000 UTC m=+60.259536001" watchObservedRunningTime="2026-03-18 13:06:41.810518692 +0000 UTC m=+60.346265497" Mar 18 13:06:43.946508 master-0 kubenswrapper[3938]: I0318 13:06:43.946459 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9bhww"] Mar 18 13:06:43.947226 master-0 kubenswrapper[3938]: E0318 13:06:43.946532 3938 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerName="prober" Mar 18 13:06:43.947226 master-0 kubenswrapper[3938]: I0318 13:06:43.946544 3938 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerName="prober" Mar 18 13:06:43.947226 master-0 kubenswrapper[3938]: I0318 13:06:43.946565 3938 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerName="prober" Mar 18 13:06:43.947226 master-0 kubenswrapper[3938]: I0318 13:06:43.946717 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9bhww" Mar 18 13:06:43.948730 master-0 kubenswrapper[3938]: I0318 13:06:43.948699 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:06:43.949462 master-0 kubenswrapper[3938]: I0318 13:06:43.949405 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:06:43.950051 master-0 kubenswrapper[3938]: I0318 13:06:43.950027 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:06:43.950237 master-0 kubenswrapper[3938]: I0318 13:06:43.950187 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055083 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055152 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055181 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " 
pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055204 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055226 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055248 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055300 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.055306 master-0 kubenswrapper[3938]: I0318 13:06:44.055334 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: 
\"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055357 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055375 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055392 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055406 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055421 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: 
\"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055450 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055465 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055491 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.056062 master-0 kubenswrapper[3938]: I0318 13:06:44.055505 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158185 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod 
\"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158231 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158258 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158276 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158288 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158301 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: 
\"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158316 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158330 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158343 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158357 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.158330 master-0 kubenswrapper[3938]: I0318 13:06:44.158372 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 
13:06:44.159010 master-0 kubenswrapper[3938]: I0318 13:06:44.158387 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.159010 master-0 kubenswrapper[3938]: I0318 13:06:44.158403 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.159010 master-0 kubenswrapper[3938]: I0318 13:06:44.158417 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.159010 master-0 kubenswrapper[3938]: I0318 13:06:44.158430 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.159508 master-0 kubenswrapper[3938]: I0318 13:06:44.158445 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.159577 master-0 kubenswrapper[3938]: I0318 13:06:44.159533 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159599 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159666 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159719 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159760 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159804 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159833 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159850 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159694 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159882 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159917 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: 
\"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.159958 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.160239 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.160270 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.160606 master-0 kubenswrapper[3938]: I0318 13:06:44.160469 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.162458 master-0 kubenswrapper[3938]: I0318 13:06:44.161013 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.162458 master-0 kubenswrapper[3938]: I0318 13:06:44.162310 3938 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.306688 master-0 kubenswrapper[3938]: I0318 13:06:44.306514 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xpppb"] Mar 18 13:06:44.307047 master-0 kubenswrapper[3938]: I0318 13:06:44.306993 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.308922 master-0 kubenswrapper[3938]: I0318 13:06:44.308454 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:06:44.309048 master-0 kubenswrapper[3938]: I0318 13:06:44.308994 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:06:44.360261 master-0 kubenswrapper[3938]: I0318 13:06:44.360203 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.360261 master-0 kubenswrapper[3938]: I0318 13:06:44.360268 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 
13:06:44.360469 master-0 kubenswrapper[3938]: I0318 13:06:44.360288 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.360469 master-0 kubenswrapper[3938]: I0318 13:06:44.360308 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.360469 master-0 kubenswrapper[3938]: I0318 13:06:44.360322 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.360469 master-0 kubenswrapper[3938]: I0318 13:06:44.360348 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.360469 master-0 kubenswrapper[3938]: I0318 13:06:44.360366 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.360469 master-0 kubenswrapper[3938]: I0318 13:06:44.360388 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.367347 master-0 kubenswrapper[3938]: I0318 13:06:44.367314 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460519 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460602 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 
kubenswrapper[3938]: I0318 13:06:44.460625 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460659 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460680 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460699 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460717 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: 
\"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461039 master-0 kubenswrapper[3938]: I0318 13:06:44.460741 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461618 master-0 kubenswrapper[3938]: I0318 13:06:44.461128 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461618 master-0 kubenswrapper[3938]: I0318 13:06:44.461210 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461618 master-0 kubenswrapper[3938]: I0318 13:06:44.461374 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461618 master-0 kubenswrapper[3938]: I0318 13:06:44.461401 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.461950 master-0 kubenswrapper[3938]: I0318 13:06:44.461889 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.463320 master-0 kubenswrapper[3938]: I0318 13:06:44.462378 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.464697 master-0 kubenswrapper[3938]: I0318 13:06:44.463709 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.484957 master-0 kubenswrapper[3938]: I0318 13:06:44.484864 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.563438 master-0 kubenswrapper[3938]: 
I0318 13:06:44.563228 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9bhww" Mar 18 13:06:44.576857 master-0 kubenswrapper[3938]: W0318 13:06:44.576787 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4086d06f_d50e_4632_9da7_508909429eef.slice/crio-c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be WatchSource:0}: Error finding container c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be: Status 404 returned error can't find the container with id c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be Mar 18 13:06:44.580683 master-0 kubenswrapper[3938]: I0318 13:06:44.580487 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9bhww" event={"ID":"4086d06f-d50e-4632-9da7-508909429eef","Type":"ContainerStarted","Data":"c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be"} Mar 18 13:06:44.630717 master-0 kubenswrapper[3938]: I0318 13:06:44.630560 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:06:44.644225 master-0 kubenswrapper[3938]: W0318 13:06:44.644182 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ae7b31_c91c_477e_a04a_a3a8541747be.slice/crio-f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae WatchSource:0}: Error finding container f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae: Status 404 returned error can't find the container with id f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae Mar 18 13:06:44.903393 master-0 kubenswrapper[3938]: I0318 13:06:44.903284 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kq2j4"] Mar 18 13:06:44.903662 master-0 kubenswrapper[3938]: I0318 13:06:44.903622 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:44.903721 master-0 kubenswrapper[3938]: E0318 13:06:44.903692 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:44.963554 master-0 kubenswrapper[3938]: I0318 13:06:44.963472 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:44.963554 master-0 kubenswrapper[3938]: I0318 13:06:44.963549 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:45.065060 master-0 kubenswrapper[3938]: I0318 13:06:45.063780 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:45.065060 master-0 kubenswrapper[3938]: I0318 13:06:45.063867 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:45.065060 master-0 kubenswrapper[3938]: E0318 13:06:45.064007 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not 
registered Mar 18 13:06:45.065060 master-0 kubenswrapper[3938]: E0318 13:06:45.064065 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:06:45.564046495 +0000 UTC m=+64.099793300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:45.129052 master-0 kubenswrapper[3938]: I0318 13:06:45.128984 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:45.570227 master-0 kubenswrapper[3938]: I0318 13:06:45.570110 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:45.570596 master-0 kubenswrapper[3938]: E0318 13:06:45.570281 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:45.570596 master-0 kubenswrapper[3938]: E0318 13:06:45.570343 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" 
failed. No retries permitted until 2026-03-18 13:06:46.570327308 +0000 UTC m=+65.106074103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:45.584863 master-0 kubenswrapper[3938]: I0318 13:06:45.584768 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerStarted","Data":"f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae"} Mar 18 13:06:46.342589 master-0 kubenswrapper[3938]: I0318 13:06:46.342489 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:46.343356 master-0 kubenswrapper[3938]: E0318 13:06:46.342647 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:46.577489 master-0 kubenswrapper[3938]: I0318 13:06:46.577384 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:46.577750 master-0 kubenswrapper[3938]: E0318 13:06:46.577551 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:46.577816 master-0 kubenswrapper[3938]: E0318 13:06:46.577734 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:06:48.577682052 +0000 UTC m=+67.113428857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:48.342319 master-0 kubenswrapper[3938]: I0318 13:06:48.342247 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:48.343012 master-0 kubenswrapper[3938]: E0318 13:06:48.342382 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:48.590643 master-0 kubenswrapper[3938]: I0318 13:06:48.590573 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:48.590844 master-0 kubenswrapper[3938]: E0318 13:06:48.590777 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:48.590883 master-0 kubenswrapper[3938]: E0318 13:06:48.590858 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:06:52.590835707 +0000 UTC m=+71.126582512 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:49.601790 master-0 kubenswrapper[3938]: I0318 13:06:49.601737 3938 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="3ae100b68292305eb4454b58c0f9a6577d27f65eaa549bd19854723db5585aee" exitCode=0 Mar 18 13:06:49.602387 master-0 kubenswrapper[3938]: I0318 13:06:49.601810 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerDied","Data":"3ae100b68292305eb4454b58c0f9a6577d27f65eaa549bd19854723db5585aee"} Mar 18 13:06:50.342246 master-0 kubenswrapper[3938]: I0318 13:06:50.342178 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:50.342501 master-0 kubenswrapper[3938]: E0318 13:06:50.342334 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:52.020611 master-0 kubenswrapper[3938]: I0318 13:06:52.020569 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:06:52.021152 master-0 kubenswrapper[3938]: E0318 13:06:52.020716 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:52.021152 master-0 kubenswrapper[3938]: E0318 13:06:52.020782 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:24.020761193 +0000 UTC m=+102.556507998 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:52.344831 master-0 kubenswrapper[3938]: I0318 13:06:52.344713 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:52.344831 master-0 kubenswrapper[3938]: E0318 13:06:52.344825 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:52.625711 master-0 kubenswrapper[3938]: I0318 13:06:52.625589 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:52.625891 master-0 kubenswrapper[3938]: E0318 13:06:52.625762 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:52.625891 master-0 kubenswrapper[3938]: E0318 13:06:52.625871 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.625852624 +0000 UTC m=+79.161599439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:54.341440 master-0 kubenswrapper[3938]: I0318 13:06:54.341354 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:54.341985 master-0 kubenswrapper[3938]: E0318 13:06:54.341494 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:56.342262 master-0 kubenswrapper[3938]: I0318 13:06:56.342082 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:56.342262 master-0 kubenswrapper[3938]: E0318 13:06:56.342181 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:06:58.341964 master-0 kubenswrapper[3938]: I0318 13:06:58.341841 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:06:58.342529 master-0 kubenswrapper[3938]: E0318 13:06:58.342020 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db"
Mar 18 13:06:58.806661 master-0 kubenswrapper[3938]: W0318 13:06:58.806610 3938 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 13:06:58.809981 master-0 kubenswrapper[3938]: I0318 13:06:58.807750 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 13:06:58.830812 master-0 kubenswrapper[3938]: I0318 13:06:58.830730 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"]
Mar 18 13:06:58.831107 master-0 kubenswrapper[3938]: I0318 13:06:58.831079 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.833232 master-0 kubenswrapper[3938]: I0318 13:06:58.833199 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 13:06:58.833769 master-0 kubenswrapper[3938]: I0318 13:06:58.833734 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 13:06:58.833918 master-0 kubenswrapper[3938]: I0318 13:06:58.833897 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:06:58.834319 master-0 kubenswrapper[3938]: I0318 13:06:58.834068 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 13:06:58.834319 master-0 kubenswrapper[3938]: I0318 13:06:58.834192 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:06:58.856904 master-0 kubenswrapper[3938]: I0318 13:06:58.856866 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c5rpz"]
Mar 18 13:06:58.857964 master-0 kubenswrapper[3938]: I0318 13:06:58.857919 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.861693 master-0 kubenswrapper[3938]: I0318 13:06:58.861667 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 13:06:58.862587 master-0 kubenswrapper[3938]: I0318 13:06:58.862557 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 13:06:58.878404 master-0 kubenswrapper[3938]: I0318 13:06:58.878351 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-bin\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878404 master-0 kubenswrapper[3938]: I0318 13:06:58.878394 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-netd\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878404 master-0 kubenswrapper[3938]: I0318 13:06:58.878415 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-systemd-units\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878432 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-ovn-kubernetes\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878452 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878470 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-netns\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878484 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878499 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-env-overrides\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878516 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878537 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbltn\" (UniqueName: \"kubernetes.io/projected/c905890a-38c9-4bed-a35c-f28fd3f6065b-kube-api-access-cbltn\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878551 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-var-lib-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878566 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-log-socket\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878593 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878614 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878630 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-kubelet\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878647 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878661 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-config\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.878672 master-0 kubenswrapper[3938]: I0318 13:06:58.878678 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovn-node-metrics-cert\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.879100 master-0 kubenswrapper[3938]: I0318 13:06:58.878696 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-slash\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.879100 master-0 kubenswrapper[3938]: I0318 13:06:58.878709 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-systemd\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.879100 master-0 kubenswrapper[3938]: I0318 13:06:58.878723 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-ovn\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.879100 master-0 kubenswrapper[3938]: I0318 13:06:58.878736 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-node-log\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.879100 master-0 kubenswrapper[3938]: I0318 13:06:58.878751 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-script-lib\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.879100 master-0 kubenswrapper[3938]: I0318 13:06:58.878769 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-etc-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.929626 master-0 kubenswrapper[3938]: I0318 13:06:58.929465 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=0.929448317 podStartE2EDuration="929.448317ms" podCreationTimestamp="2026-03-18 13:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:58.904219931 +0000 UTC m=+77.439966766" watchObservedRunningTime="2026-03-18 13:06:58.929448317 +0000 UTC m=+77.465195122"
Mar 18 13:06:58.979406 master-0 kubenswrapper[3938]: I0318 13:06:58.979342 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-log-socket\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.979406 master-0 kubenswrapper[3938]: I0318 13:06:58.979414 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-log-socket\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.979738 master-0 kubenswrapper[3938]: I0318 13:06:58.979449 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.979738 master-0 kubenswrapper[3938]: I0318 13:06:58.979649 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.979895 master-0 kubenswrapper[3938]: I0318 13:06:58.979848 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-kubelet\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.979995 master-0 kubenswrapper[3938]: I0318 13:06:58.979913 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.979995 master-0 kubenswrapper[3938]: I0318 13:06:58.979951 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-config\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.979995 master-0 kubenswrapper[3938]: I0318 13:06:58.979954 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-kubelet\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.979995 master-0 kubenswrapper[3938]: I0318 13:06:58.979971 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovn-node-metrics-cert\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980273 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-slash\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980308 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-systemd\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980343 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-ovn\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980363 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-node-log\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980386 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-script-lib\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980410 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-etc-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980440 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-bin\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980452 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-systemd\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980459 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-netd\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980492 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-netd\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980512 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-systemd-units\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980539 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-ovn\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980565 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-ovn-kubernetes\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980590 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980613 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-netns\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980635 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.981595 master-0 kubenswrapper[3938]: I0318 13:06:58.980656 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-env-overrides\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980678 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980702 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbltn\" (UniqueName: \"kubernetes.io/projected/c905890a-38c9-4bed-a35c-f28fd3f6065b-kube-api-access-cbltn\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980724 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-var-lib-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980774 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-var-lib-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980806 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-node-log\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980868 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980921 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-systemd-units\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.980970 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-slash\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981008 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-netns\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981041 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-etc-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981067 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-bin\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981087 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-config\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981095 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-ovn-kubernetes\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981428 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-script-lib\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981480 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981498 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.982217 master-0 kubenswrapper[3938]: I0318 13:06:58.981559 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-openvswitch\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.982780 master-0 kubenswrapper[3938]: I0318 13:06:58.981530 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-env-overrides\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.983599 master-0 kubenswrapper[3938]: I0318 13:06:58.983570 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovn-node-metrics-cert\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:58.985439 master-0 kubenswrapper[3938]: I0318 13:06:58.985401 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.998874 master-0 kubenswrapper[3938]: I0318 13:06:58.998572 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:58.998874 master-0 kubenswrapper[3938]: I0318 13:06:58.998827 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbltn\" (UniqueName: \"kubernetes.io/projected/c905890a-38c9-4bed-a35c-f28fd3f6065b-kube-api-access-cbltn\") pod \"ovnkube-node-c5rpz\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:59.153675 master-0 kubenswrapper[3938]: I0318 13:06:59.153550 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:06:59.171547 master-0 kubenswrapper[3938]: I0318 13:06:59.171152 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz"
Mar 18 13:06:59.815806 master-0 kubenswrapper[3938]: I0318 13:06:59.815753 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-zlgkc"]
Mar 18 13:06:59.816852 master-0 kubenswrapper[3938]: I0318 13:06:59.816746 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:06:59.816852 master-0 kubenswrapper[3938]: E0318 13:06:59.816803 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f"
Mar 18 13:06:59.889977 master-0 kubenswrapper[3938]: I0318 13:06:59.889894 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:06:59.991211 master-0 kubenswrapper[3938]: I0318 13:06:59.991148 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:07:00.335026 master-0 kubenswrapper[3938]: E0318 13:07:00.334988 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 13:07:00.335026 master-0 kubenswrapper[3938]: E0318 13:07:00.335016 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 13:07:00.335026 master-0 kubenswrapper[3938]: E0318 13:07:00.335030 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 13:07:00.335495 master-0 kubenswrapper[3938]: E0318 13:07:00.335089 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.835070073 +0000 UTC m=+79.370816878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 13:07:00.341564 master-0 kubenswrapper[3938]: I0318 13:07:00.341509 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:07:00.341678 master-0 kubenswrapper[3938]: E0318 13:07:00.341635 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db"
Mar 18 13:07:00.698554 master-0 kubenswrapper[3938]: I0318 13:07:00.698462 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:07:00.698782 master-0 kubenswrapper[3938]: E0318 13:07:00.698642 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:07:00.698782 master-0 kubenswrapper[3938]: E0318 13:07:00.698724 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:07:16.698709192 +0000 UTC m=+95.234455997 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:07:00.900338 master-0 kubenswrapper[3938]: I0318 13:07:00.900233 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:00.900903 master-0 kubenswrapper[3938]: E0318 13:07:00.900470 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:07:00.900903 master-0 kubenswrapper[3938]: E0318 13:07:00.900515 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:07:00.900903 master-0 kubenswrapper[3938]: E0318 13:07:00.900537 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:00.900903 master-0 kubenswrapper[3938]: E0318 13:07:00.900624 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:01.900600041 +0000 UTC m=+80.436346876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:01.341522 master-0 kubenswrapper[3938]: I0318 13:07:01.341432 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:01.341776 master-0 kubenswrapper[3938]: E0318 13:07:01.341570 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:01.351331 master-0 kubenswrapper[3938]: W0318 13:07:01.351247 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc77989_ecfc_4500_92a0_18c2b3b78408.slice/crio-ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a WatchSource:0}: Error finding container ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a: Status 404 returned error can't find the container with id ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a Mar 18 13:07:01.358419 master-0 kubenswrapper[3938]: W0318 13:07:01.358376 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc905890a_38c9_4bed_a35c_f28fd3f6065b.slice/crio-189afff678131b27ad291ee0fef6532b57fa0f072d8ef23f7e018e4def02d53e WatchSource:0}: Error finding container 189afff678131b27ad291ee0fef6532b57fa0f072d8ef23f7e018e4def02d53e: Status 404 returned error can't find the container with id 189afff678131b27ad291ee0fef6532b57fa0f072d8ef23f7e018e4def02d53e Mar 18 13:07:01.627069 master-0 kubenswrapper[3938]: I0318 13:07:01.626926 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" event={"ID":"4bc77989-ecfc-4500-92a0-18c2b3b78408","Type":"ContainerStarted","Data":"ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a"} Mar 18 13:07:01.627684 master-0 kubenswrapper[3938]: I0318 13:07:01.627665 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz" event={"ID":"c905890a-38c9-4bed-a35c-f28fd3f6065b","Type":"ContainerStarted","Data":"189afff678131b27ad291ee0fef6532b57fa0f072d8ef23f7e018e4def02d53e"} Mar 18 13:07:01.909395 master-0 kubenswrapper[3938]: I0318 13:07:01.909057 3938 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:01.909395 master-0 kubenswrapper[3938]: E0318 13:07:01.909227 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:07:01.909395 master-0 kubenswrapper[3938]: E0318 13:07:01.909347 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:07:01.909395 master-0 kubenswrapper[3938]: E0318 13:07:01.909360 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:01.909395 master-0 kubenswrapper[3938]: E0318 13:07:01.909409 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:03.909393227 +0000 UTC m=+82.445140032 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:02.310122 master-0 kubenswrapper[3938]: I0318 13:07:02.309992 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-xcbtb"] Mar 18 13:07:02.310440 master-0 kubenswrapper[3938]: I0318 13:07:02.310418 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.315178 master-0 kubenswrapper[3938]: I0318 13:07:02.315126 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:07:02.317362 master-0 kubenswrapper[3938]: I0318 13:07:02.315284 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:07:02.317362 master-0 kubenswrapper[3938]: I0318 13:07:02.315530 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:07:02.317362 master-0 kubenswrapper[3938]: I0318 13:07:02.315624 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:07:02.317629 master-0 kubenswrapper[3938]: I0318 13:07:02.315156 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:07:02.342264 master-0 kubenswrapper[3938]: I0318 13:07:02.342007 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:02.342846 master-0 kubenswrapper[3938]: E0318 13:07:02.342810 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:02.416533 master-0 kubenswrapper[3938]: I0318 13:07:02.416487 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.416747 master-0 kubenswrapper[3938]: I0318 13:07:02.416565 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.416747 master-0 kubenswrapper[3938]: I0318 13:07:02.416610 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.416747 master-0 kubenswrapper[3938]: I0318 13:07:02.416654 3938 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.517840 master-0 kubenswrapper[3938]: I0318 13:07:02.517785 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.518052 master-0 kubenswrapper[3938]: I0318 13:07:02.517859 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.518052 master-0 kubenswrapper[3938]: I0318 13:07:02.517880 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.518052 master-0 kubenswrapper[3938]: I0318 13:07:02.517912 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " 
pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.519044 master-0 kubenswrapper[3938]: I0318 13:07:02.519020 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.519133 master-0 kubenswrapper[3938]: E0318 13:07:02.519115 3938 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 18 13:07:02.519171 master-0 kubenswrapper[3938]: E0318 13:07:02.519160 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert podName:eb8907fd-35dd-452a-8032-f2f95a6e553a nodeName:}" failed. No retries permitted until 2026-03-18 13:07:03.01914753 +0000 UTC m=+81.554894335 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert") pod "network-node-identity-xcbtb" (UID: "eb8907fd-35dd-452a-8032-f2f95a6e553a") : secret "network-node-identity-cert" not found Mar 18 13:07:02.521741 master-0 kubenswrapper[3938]: I0318 13:07:02.521716 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.539174 master-0 kubenswrapper[3938]: I0318 13:07:02.539136 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:02.646528 master-0 kubenswrapper[3938]: I0318 13:07:02.646425 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9bhww" event={"ID":"4086d06f-d50e-4632-9da7-508909429eef","Type":"ContainerStarted","Data":"5394d664276b732405c2f36d93a1c6684a2d3f81f6fff5e5dd89ce8fad35d0cd"} Mar 18 13:07:02.648089 master-0 kubenswrapper[3938]: I0318 13:07:02.648063 3938 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="9b8b0976c817ccd695886d1ba83ffcc31d11cd506356512ccbdf4d71a9024f68" exitCode=0 Mar 18 13:07:02.648168 master-0 kubenswrapper[3938]: I0318 13:07:02.648089 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" 
event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerDied","Data":"9b8b0976c817ccd695886d1ba83ffcc31d11cd506356512ccbdf4d71a9024f68"} Mar 18 13:07:02.649687 master-0 kubenswrapper[3938]: I0318 13:07:02.649233 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" event={"ID":"4bc77989-ecfc-4500-92a0-18c2b3b78408","Type":"ContainerStarted","Data":"8928c372e7c96b99c3f584a5cb63de5798b7e44fb2cc782b25ced63c9daed0e8"} Mar 18 13:07:02.689420 master-0 kubenswrapper[3938]: I0318 13:07:02.689335 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9bhww" podStartSLOduration=2.871931039 podStartE2EDuration="19.689317979s" podCreationTimestamp="2026-03-18 13:06:43 +0000 UTC" firstStartedPulling="2026-03-18 13:06:44.579274078 +0000 UTC m=+63.115020883" lastFinishedPulling="2026-03-18 13:07:01.396661018 +0000 UTC m=+79.932407823" observedRunningTime="2026-03-18 13:07:02.689279198 +0000 UTC m=+81.225026013" watchObservedRunningTime="2026-03-18 13:07:02.689317979 +0000 UTC m=+81.225064784" Mar 18 13:07:03.037623 master-0 kubenswrapper[3938]: I0318 13:07:03.036992 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:03.048360 master-0 kubenswrapper[3938]: I0318 13:07:03.045696 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:03.224321 master-0 kubenswrapper[3938]: 
I0318 13:07:03.224280 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:07:03.342312 master-0 kubenswrapper[3938]: I0318 13:07:03.342189 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:03.342501 master-0 kubenswrapper[3938]: E0318 13:07:03.342317 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:03.945270 master-0 kubenswrapper[3938]: I0318 13:07:03.945216 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:03.945576 master-0 kubenswrapper[3938]: E0318 13:07:03.945383 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:07:03.945576 master-0 kubenswrapper[3938]: E0318 13:07:03.945405 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:07:03.945576 master-0 kubenswrapper[3938]: E0318 13:07:03.945415 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod 
openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:03.945576 master-0 kubenswrapper[3938]: E0318 13:07:03.945485 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.945465094 +0000 UTC m=+86.481211899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:04.153601 master-0 kubenswrapper[3938]: W0318 13:07:04.153565 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb8907fd_35dd_452a_8032_f2f95a6e553a.slice/crio-1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a WatchSource:0}: Error finding container 1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a: Status 404 returned error can't find the container with id 1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a Mar 18 13:07:04.341550 master-0 kubenswrapper[3938]: I0318 13:07:04.341443 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:04.341720 master-0 kubenswrapper[3938]: E0318 13:07:04.341564 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:04.654266 master-0 kubenswrapper[3938]: I0318 13:07:04.654166 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerStarted","Data":"1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a"} Mar 18 13:07:05.342460 master-0 kubenswrapper[3938]: I0318 13:07:05.342060 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:05.342460 master-0 kubenswrapper[3938]: E0318 13:07:05.342439 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:06.342364 master-0 kubenswrapper[3938]: I0318 13:07:06.342276 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:06.342822 master-0 kubenswrapper[3938]: E0318 13:07:06.342782 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:07.357224 master-0 kubenswrapper[3938]: I0318 13:07:07.357167 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:07.357703 master-0 kubenswrapper[3938]: E0318 13:07:07.357297 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:07.663744 master-0 kubenswrapper[3938]: I0318 13:07:07.663619 3938 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="158c0af92fac11481577106174b03b386a7b412c2e448451da762deb74b713bd" exitCode=0 Mar 18 13:07:07.663744 master-0 kubenswrapper[3938]: I0318 13:07:07.663662 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerDied","Data":"158c0af92fac11481577106174b03b386a7b412c2e448451da762deb74b713bd"} Mar 18 13:07:08.007899 master-0 kubenswrapper[3938]: I0318 13:07:08.007748 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:08.008424 master-0 kubenswrapper[3938]: E0318 13:07:08.008170 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:07:08.008424 master-0 kubenswrapper[3938]: E0318 13:07:08.008209 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:07:08.008424 master-0 kubenswrapper[3938]: E0318 13:07:08.008221 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:08.008424 master-0 kubenswrapper[3938]: E0318 13:07:08.008285 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:16.008268123 +0000 UTC m=+94.544014928 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:08.342264 master-0 kubenswrapper[3938]: I0318 13:07:08.342145 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:08.342421 master-0 kubenswrapper[3938]: E0318 13:07:08.342280 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:09.342247 master-0 kubenswrapper[3938]: I0318 13:07:09.341994 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:09.342247 master-0 kubenswrapper[3938]: E0318 13:07:09.342220 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:10.342246 master-0 kubenswrapper[3938]: I0318 13:07:10.342192 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:10.342468 master-0 kubenswrapper[3938]: E0318 13:07:10.342317 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:10.674484 master-0 kubenswrapper[3938]: I0318 13:07:10.674298 3938 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="5308c4990ee617dab17b794620acded12b71b96d5a2e7a368924488be2073775" exitCode=0 Mar 18 13:07:10.674484 master-0 kubenswrapper[3938]: I0318 13:07:10.674346 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerDied","Data":"5308c4990ee617dab17b794620acded12b71b96d5a2e7a368924488be2073775"} Mar 18 13:07:11.342054 master-0 kubenswrapper[3938]: I0318 13:07:11.342014 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:11.342344 master-0 kubenswrapper[3938]: E0318 13:07:11.342131 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:12.341981 master-0 kubenswrapper[3938]: I0318 13:07:12.341916 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:12.343045 master-0 kubenswrapper[3938]: E0318 13:07:12.343001 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:12.357443 master-0 kubenswrapper[3938]: I0318 13:07:12.357389 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:07:13.360758 master-0 kubenswrapper[3938]: I0318 13:07:13.360645 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:13.361307 master-0 kubenswrapper[3938]: E0318 13:07:13.360796 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:14.342161 master-0 kubenswrapper[3938]: I0318 13:07:14.342091 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:14.342431 master-0 kubenswrapper[3938]: E0318 13:07:14.342322 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:15.341401 master-0 kubenswrapper[3938]: I0318 13:07:15.341330 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:15.342024 master-0 kubenswrapper[3938]: E0318 13:07:15.341570 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:16.039662 master-0 kubenswrapper[3938]: I0318 13:07:16.039557 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:16.039903 master-0 kubenswrapper[3938]: E0318 13:07:16.039696 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:07:16.039903 master-0 kubenswrapper[3938]: E0318 13:07:16.039711 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:07:16.039903 master-0 kubenswrapper[3938]: E0318 13:07:16.039721 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:16.039903 master-0 kubenswrapper[3938]: E0318 13:07:16.039769 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:32.039756948 +0000 UTC m=+110.575503743 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:16.342130 master-0 kubenswrapper[3938]: I0318 13:07:16.341986 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:16.342130 master-0 kubenswrapper[3938]: E0318 13:07:16.342124 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:16.745344 master-0 kubenswrapper[3938]: I0318 13:07:16.745279 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:16.745554 master-0 kubenswrapper[3938]: E0318 13:07:16.745411 3938 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:07:16.745554 master-0 kubenswrapper[3938]: E0318 13:07:16.745461 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:48.745444823 +0000 UTC m=+127.281191628 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:07:17.341703 master-0 kubenswrapper[3938]: I0318 13:07:17.341643 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:17.341918 master-0 kubenswrapper[3938]: E0318 13:07:17.341771 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:18.342204 master-0 kubenswrapper[3938]: I0318 13:07:18.342150 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:18.342747 master-0 kubenswrapper[3938]: E0318 13:07:18.342288 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:19.342111 master-0 kubenswrapper[3938]: I0318 13:07:19.342058 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:19.342345 master-0 kubenswrapper[3938]: E0318 13:07:19.342200 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:20.344726 master-0 kubenswrapper[3938]: I0318 13:07:20.344669 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:20.345265 master-0 kubenswrapper[3938]: E0318 13:07:20.344791 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:21.116496 master-0 kubenswrapper[3938]: I0318 13:07:21.116437 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:07:21.342362 master-0 kubenswrapper[3938]: I0318 13:07:21.342284 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:21.342668 master-0 kubenswrapper[3938]: E0318 13:07:21.342430 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:22.342225 master-0 kubenswrapper[3938]: I0318 13:07:22.342175 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:22.343007 master-0 kubenswrapper[3938]: E0318 13:07:22.342977 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:23.083038 master-0 kubenswrapper[3938]: I0318 13:07:23.082910 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=11.082888898 podStartE2EDuration="11.082888898s" podCreationTimestamp="2026-03-18 13:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:23.082641561 +0000 UTC m=+101.618388386" watchObservedRunningTime="2026-03-18 13:07:23.082888898 +0000 UTC m=+101.618635743" Mar 18 13:07:23.342502 master-0 kubenswrapper[3938]: I0318 13:07:23.342346 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:23.342502 master-0 kubenswrapper[3938]: E0318 13:07:23.342474 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:24.070212 master-0 kubenswrapper[3938]: I0318 13:07:24.070136 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:07:24.070575 master-0 kubenswrapper[3938]: E0318 13:07:24.070270 3938 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:24.070575 master-0 kubenswrapper[3938]: E0318 13:07:24.070332 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.070312958 +0000 UTC m=+166.606059753 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:24.342009 master-0 kubenswrapper[3938]: I0318 13:07:24.341848 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:24.342009 master-0 kubenswrapper[3938]: E0318 13:07:24.342001 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:25.342131 master-0 kubenswrapper[3938]: I0318 13:07:25.342042 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:25.342839 master-0 kubenswrapper[3938]: E0318 13:07:25.342197 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:25.501036 master-0 kubenswrapper[3938]: I0318 13:07:25.500363 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=5.500343156 podStartE2EDuration="5.500343156s" podCreationTimestamp="2026-03-18 13:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:24.091733606 +0000 UTC m=+102.627480421" watchObservedRunningTime="2026-03-18 13:07:25.500343156 +0000 UTC m=+104.036089971" Mar 18 13:07:25.501036 master-0 kubenswrapper[3938]: I0318 13:07:25.500491 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 13:07:26.341690 master-0 kubenswrapper[3938]: I0318 13:07:26.341498 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:26.341690 master-0 kubenswrapper[3938]: E0318 13:07:26.341664 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:27.342419 master-0 kubenswrapper[3938]: I0318 13:07:27.342314 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:27.343030 master-0 kubenswrapper[3938]: E0318 13:07:27.342541 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:28.342459 master-0 kubenswrapper[3938]: I0318 13:07:28.342380 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:28.343331 master-0 kubenswrapper[3938]: E0318 13:07:28.342533 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:29.059074 master-0 kubenswrapper[3938]: I0318 13:07:29.058309 3938 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c5rpz"] Mar 18 13:07:29.341528 master-0 kubenswrapper[3938]: I0318 13:07:29.341485 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:29.341621 master-0 kubenswrapper[3938]: E0318 13:07:29.341593 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:29.728623 master-0 kubenswrapper[3938]: I0318 13:07:29.728586 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" event={"ID":"4bc77989-ecfc-4500-92a0-18c2b3b78408","Type":"ContainerStarted","Data":"da555fd9f47f4294570e6ad25c16548ca14ae9ec137f334d01bde47cd422dcf9"} Mar 18 13:07:29.730383 master-0 kubenswrapper[3938]: I0318 13:07:29.730335 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerStarted","Data":"42763f2e1945cdd442dd148f3b0766793cb775dcfcb2d6ede73f97fce1315683"} Mar 18 13:07:29.730383 master-0 kubenswrapper[3938]: I0318 13:07:29.730364 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerStarted","Data":"29d359cca5ab3a3d81be3b3e07d8d654046f88e76bc1b8790777c5eda91e3bcc"} Mar 18 13:07:29.773636 master-0 kubenswrapper[3938]: I0318 13:07:29.773551 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=5.773530744 podStartE2EDuration="5.773530744s" podCreationTimestamp="2026-03-18 13:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:29.773350469 +0000 UTC m=+108.309097284" watchObservedRunningTime="2026-03-18 13:07:29.773530744 +0000 UTC m=+108.309277549" Mar 18 13:07:29.947191 master-0 kubenswrapper[3938]: I0318 13:07:29.947126 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" podStartSLOduration=4.451542992 podStartE2EDuration="31.94710782s" podCreationTimestamp="2026-03-18 13:06:58 +0000 UTC" firstStartedPulling="2026-03-18 13:07:01.841616414 +0000 UTC m=+80.377363219" lastFinishedPulling="2026-03-18 13:07:29.337181242 +0000 UTC m=+107.872928047" observedRunningTime="2026-03-18 13:07:29.852605188 +0000 UTC m=+108.388352013" watchObservedRunningTime="2026-03-18 13:07:29.94710782 +0000 UTC m=+108.482854645" Mar 18 13:07:29.947261 master-0 kubenswrapper[3938]: I0318 13:07:29.947238 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-xcbtb" podStartSLOduration=2.732443715 podStartE2EDuration="27.947233933s" podCreationTimestamp="2026-03-18 13:07:02 +0000 UTC" firstStartedPulling="2026-03-18 13:07:04.155445022 +0000 UTC m=+82.691191827" lastFinishedPulling="2026-03-18 13:07:29.37023524 +0000 UTC m=+107.905982045" observedRunningTime="2026-03-18 13:07:29.946890044 +0000 UTC m=+108.482636879" watchObservedRunningTime="2026-03-18 13:07:29.947233933 +0000 UTC m=+108.482980748" Mar 18 13:07:30.341667 master-0 kubenswrapper[3938]: I0318 13:07:30.341597 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:30.341886 master-0 kubenswrapper[3938]: E0318 13:07:30.341749 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:30.734407 master-0 kubenswrapper[3938]: I0318 13:07:30.734315 3938 generic.go:334] "Generic (PLEG): container finished" podID="c905890a-38c9-4bed-a35c-f28fd3f6065b" containerID="50528ca6cb893f3d6a0ccb3fc08d18692a354d8560172475c81f9ff2a773054c" exitCode=0 Mar 18 13:07:30.735048 master-0 kubenswrapper[3938]: I0318 13:07:30.734425 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz" event={"ID":"c905890a-38c9-4bed-a35c-f28fd3f6065b","Type":"ContainerDied","Data":"50528ca6cb893f3d6a0ccb3fc08d18692a354d8560172475c81f9ff2a773054c"} Mar 18 13:07:30.738588 master-0 kubenswrapper[3938]: I0318 13:07:30.738467 3938 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="939081bad25da33d133eff9bd4c3f679efe60bd386467b9c7ea166c2edea2ccd" exitCode=0 Mar 18 13:07:30.739007 master-0 kubenswrapper[3938]: I0318 13:07:30.738885 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerDied","Data":"939081bad25da33d133eff9bd4c3f679efe60bd386467b9c7ea166c2edea2ccd"} Mar 18 13:07:30.748808 master-0 kubenswrapper[3938]: I0318 13:07:30.748793 3938 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz" Mar 18 13:07:30.831507 master-0 kubenswrapper[3938]: I0318 13:07:30.831457 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-kubelet\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831507 master-0 kubenswrapper[3938]: I0318 13:07:30.831508 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-log-socket\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831539 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-env-overrides\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831561 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovn-node-metrics-cert\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831585 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-bin\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831603 3938 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-systemd\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831623 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-etc-openvswitch\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831643 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-netd\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831661 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-openvswitch\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831710 master-0 kubenswrapper[3938]: I0318 13:07:30.831682 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbltn\" (UniqueName: \"kubernetes.io/projected/c905890a-38c9-4bed-a35c-f28fd3f6065b-kube-api-access-cbltn\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831721 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-ovn\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831744 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-script-lib\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831764 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-ovn-kubernetes\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831783 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-netns\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831800 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-node-log\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831819 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831837 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-slash\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831856 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-config\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831874 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-var-lib-openvswitch\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.831899 master-0 kubenswrapper[3938]: I0318 13:07:30.831893 3938 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-systemd-units\") pod \"c905890a-38c9-4bed-a35c-f28fd3f6065b\" (UID: \"c905890a-38c9-4bed-a35c-f28fd3f6065b\") " Mar 18 13:07:30.832354 master-0 kubenswrapper[3938]: I0318 13:07:30.832329 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.832416 master-0 kubenswrapper[3938]: I0318 13:07:30.832359 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-log-socket" (OuterVolumeSpecName: "log-socket") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.832713 master-0 kubenswrapper[3938]: I0318 13:07:30.832691 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:30.833555 master-0 kubenswrapper[3938]: I0318 13:07:30.833528 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:30.833611 master-0 kubenswrapper[3938]: I0318 13:07:30.833559 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.833611 master-0 kubenswrapper[3938]: I0318 13:07:30.833574 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.833611 master-0 kubenswrapper[3938]: I0318 13:07:30.833589 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.833611 master-0 kubenswrapper[3938]: I0318 13:07:30.833601 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.833759 master-0 kubenswrapper[3938]: I0318 13:07:30.833615 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834414 master-0 kubenswrapper[3938]: I0318 13:07:30.834206 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834530 master-0 kubenswrapper[3938]: I0318 13:07:30.834222 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834610 master-0 kubenswrapper[3938]: I0318 13:07:30.834243 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-node-log" (OuterVolumeSpecName: "node-log") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834707 master-0 kubenswrapper[3938]: I0318 13:07:30.834253 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834779 master-0 kubenswrapper[3938]: I0318 13:07:30.834272 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834849 master-0 kubenswrapper[3938]: I0318 13:07:30.834328 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-slash" (OuterVolumeSpecName: "host-slash") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.834926 master-0 kubenswrapper[3938]: I0318 13:07:30.834344 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.835026 master-0 kubenswrapper[3938]: I0318 13:07:30.834361 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:07:30.835097 master-0 kubenswrapper[3938]: I0318 13:07:30.834563 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:30.840590 master-0 kubenswrapper[3938]: I0318 13:07:30.840567 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c905890a-38c9-4bed-a35c-f28fd3f6065b-kube-api-access-cbltn" (OuterVolumeSpecName: "kube-api-access-cbltn") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "kube-api-access-cbltn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:30.840661 master-0 kubenswrapper[3938]: I0318 13:07:30.840612 3938 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c905890a-38c9-4bed-a35c-f28fd3f6065b" (UID: "c905890a-38c9-4bed-a35c-f28fd3f6065b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:07:30.932343 master-0 kubenswrapper[3938]: I0318 13:07:30.932283 3938 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932343 master-0 kubenswrapper[3938]: I0318 13:07:30.932317 3938 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932343 master-0 kubenswrapper[3938]: I0318 13:07:30.932337 3938 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbltn\" (UniqueName: \"kubernetes.io/projected/c905890a-38c9-4bed-a35c-f28fd3f6065b-kube-api-access-cbltn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932343 master-0 kubenswrapper[3938]: I0318 13:07:30.932351 3938 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932362 3938 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932372 3938 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932382 3938 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932393 3938 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932403 3938 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932414 3938 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932425 3938 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-node-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932435 3938 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932445 3938 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932455 3938 reconciler_common.go:293] "Volume 
detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932465 3938 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932476 3938 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932485 3938 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c905890a-38c9-4bed-a35c-f28fd3f6065b-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932495 3938 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c905890a-38c9-4bed-a35c-f28fd3f6065b-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932504 3938 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:30.932633 master-0 kubenswrapper[3938]: I0318 13:07:30.932515 3938 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c905890a-38c9-4bed-a35c-f28fd3f6065b-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:31.342386 master-0 kubenswrapper[3938]: I0318 13:07:31.342253 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:31.342386 master-0 kubenswrapper[3938]: E0318 13:07:31.342360 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:31.743693 master-0 kubenswrapper[3938]: I0318 13:07:31.743633 3938 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="ab5b83d779ab6537d0a99adbe63763b23469f75fb94b22198d32842d6404c007" exitCode=0 Mar 18 13:07:31.743693 master-0 kubenswrapper[3938]: I0318 13:07:31.743692 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerDied","Data":"ab5b83d779ab6537d0a99adbe63763b23469f75fb94b22198d32842d6404c007"} Mar 18 13:07:31.746039 master-0 kubenswrapper[3938]: I0318 13:07:31.745999 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz" event={"ID":"c905890a-38c9-4bed-a35c-f28fd3f6065b","Type":"ContainerDied","Data":"189afff678131b27ad291ee0fef6532b57fa0f072d8ef23f7e018e4def02d53e"} Mar 18 13:07:31.746126 master-0 kubenswrapper[3938]: I0318 13:07:31.746043 3938 scope.go:117] "RemoveContainer" containerID="50528ca6cb893f3d6a0ccb3fc08d18692a354d8560172475c81f9ff2a773054c" Mar 18 13:07:31.746161 master-0 kubenswrapper[3938]: I0318 13:07:31.746151 3938 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c5rpz" Mar 18 13:07:31.977536 master-0 kubenswrapper[3938]: I0318 13:07:31.977478 3938 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c5rpz"] Mar 18 13:07:32.040138 master-0 kubenswrapper[3938]: I0318 13:07:32.039980 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:32.040138 master-0 kubenswrapper[3938]: E0318 13:07:32.040085 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:07:32.040138 master-0 kubenswrapper[3938]: E0318 13:07:32.040125 3938 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:07:32.040138 master-0 kubenswrapper[3938]: E0318 13:07:32.040138 3938 projected.go:194] Error preparing data for projected volume kube-api-access-rv9m7 for pod openshift-network-diagnostics/network-check-target-zlgkc: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:32.040432 master-0 kubenswrapper[3938]: E0318 13:07:32.040195 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7 podName:2cad2401-dab1-49f7-870e-a742ebfe323f nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.040178774 +0000 UTC m=+142.575925579 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rv9m7" (UniqueName: "kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7") pod "network-check-target-zlgkc" (UID: "2cad2401-dab1-49f7-870e-a742ebfe323f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:07:32.065533 master-0 kubenswrapper[3938]: I0318 13:07:32.065465 3938 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c5rpz"] Mar 18 13:07:32.076672 master-0 kubenswrapper[3938]: I0318 13:07:32.076625 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pfs29"] Mar 18 13:07:32.076874 master-0 kubenswrapper[3938]: E0318 13:07:32.076721 3938 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c905890a-38c9-4bed-a35c-f28fd3f6065b" containerName="kubecfg-setup" Mar 18 13:07:32.076874 master-0 kubenswrapper[3938]: I0318 13:07:32.076731 3938 state_mem.go:107] "Deleted CPUSet assignment" podUID="c905890a-38c9-4bed-a35c-f28fd3f6065b" containerName="kubecfg-setup" Mar 18 13:07:32.076874 master-0 kubenswrapper[3938]: I0318 13:07:32.076762 3938 memory_manager.go:354] "RemoveStaleState removing state" podUID="c905890a-38c9-4bed-a35c-f28fd3f6065b" containerName="kubecfg-setup" Mar 18 13:07:32.077385 master-0 kubenswrapper[3938]: I0318 13:07:32.077360 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.083567 master-0 kubenswrapper[3938]: I0318 13:07:32.081588 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 13:07:32.083567 master-0 kubenswrapper[3938]: I0318 13:07:32.082149 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:07:32.241352 master-0 kubenswrapper[3938]: I0318 13:07:32.241275 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241384 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241429 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241476 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod 
\"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241517 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241562 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241613 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.241670 master-0 kubenswrapper[3938]: I0318 13:07:32.241658 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.241699 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.241741 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.241806 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.241853 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.241897 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.241973 3938 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.242021 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.242070 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.242333 master-0 kubenswrapper[3938]: I0318 13:07:32.242192 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.243104 master-0 kubenswrapper[3938]: I0318 13:07:32.242474 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.243104 master-0 kubenswrapper[3938]: I0318 13:07:32.242546 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.243104 master-0 kubenswrapper[3938]: I0318 13:07:32.242621 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.342421 master-0 kubenswrapper[3938]: I0318 13:07:32.342227 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:32.343549 master-0 kubenswrapper[3938]: E0318 13:07:32.343463 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:32.343549 master-0 kubenswrapper[3938]: I0318 13:07:32.343523 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.343928 master-0 kubenswrapper[3938]: I0318 13:07:32.343636 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.343928 master-0 kubenswrapper[3938]: I0318 13:07:32.343685 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.343928 master-0 kubenswrapper[3938]: I0318 13:07:32.343736 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.343928 master-0 kubenswrapper[3938]: I0318 13:07:32.343739 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: 
\"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344069 master-0 kubenswrapper[3938]: I0318 13:07:32.343921 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344069 master-0 kubenswrapper[3938]: I0318 13:07:32.343991 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344069 master-0 kubenswrapper[3938]: I0318 13:07:32.344021 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344069 master-0 kubenswrapper[3938]: I0318 13:07:32.344045 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344069 master-0 kubenswrapper[3938]: I0318 13:07:32.344066 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: 
\"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344212 master-0 kubenswrapper[3938]: I0318 13:07:32.344087 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344212 master-0 kubenswrapper[3938]: I0318 13:07:32.344110 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344212 master-0 kubenswrapper[3938]: I0318 13:07:32.344157 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344212 master-0 kubenswrapper[3938]: I0318 13:07:32.344181 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344327 master-0 kubenswrapper[3938]: I0318 13:07:32.344220 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344327 master-0 kubenswrapper[3938]: I0318 13:07:32.344258 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344327 master-0 kubenswrapper[3938]: I0318 13:07:32.344283 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344327 master-0 kubenswrapper[3938]: I0318 13:07:32.344303 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344327 master-0 kubenswrapper[3938]: I0318 13:07:32.344323 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344464 master-0 kubenswrapper[3938]: I0318 13:07:32.344344 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344464 master-0 kubenswrapper[3938]: I0318 13:07:32.344371 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344464 master-0 kubenswrapper[3938]: I0318 13:07:32.344397 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344464 master-0 kubenswrapper[3938]: I0318 13:07:32.344419 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.344464 master-0 kubenswrapper[3938]: I0318 13:07:32.344441 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.344763 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.344809 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.344834 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.344857 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.344879 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345024 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345017 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345092 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345117 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345141 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345283 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345357 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345412 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.345443 master-0 kubenswrapper[3938]: I0318 13:07:32.345419 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.348417 master-0 kubenswrapper[3938]: I0318 13:07:32.348384 3938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c905890a-38c9-4bed-a35c-f28fd3f6065b" path="/var/lib/kubelet/pods/c905890a-38c9-4bed-a35c-f28fd3f6065b/volumes" Mar 18 13:07:32.348886 master-0 kubenswrapper[3938]: I0318 13:07:32.348853 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.386885 master-0 kubenswrapper[3938]: I0318 13:07:32.386813 3938 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.393192 master-0 kubenswrapper[3938]: I0318 13:07:32.393155 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:32.405031 master-0 kubenswrapper[3938]: W0318 13:07:32.404962 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20dc979a_732b_43b5_acc2_118e4c350470.slice/crio-41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075 WatchSource:0}: Error finding container 41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075: Status 404 returned error can't find the container with id 41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075 Mar 18 13:07:32.750857 master-0 kubenswrapper[3938]: I0318 13:07:32.750800 3938 generic.go:334] "Generic (PLEG): container finished" podID="20dc979a-732b-43b5-acc2-118e4c350470" containerID="25dc4f55701fc072574e9fbf9afecda3f3ce7724cd8af5190b0641c9037070fb" exitCode=0 Mar 18 13:07:32.752021 master-0 kubenswrapper[3938]: I0318 13:07:32.750857 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerDied","Data":"25dc4f55701fc072574e9fbf9afecda3f3ce7724cd8af5190b0641c9037070fb"} Mar 18 13:07:32.752315 master-0 kubenswrapper[3938]: I0318 13:07:32.752271 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075"} Mar 18 13:07:32.760319 master-0 
kubenswrapper[3938]: I0318 13:07:32.760249 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpppb" event={"ID":"46ae7b31-c91c-477e-a04a-a3a8541747be","Type":"ContainerStarted","Data":"39b06a1ec6a41217520b4aca3f9d9a915fef1315fa12de584e3c81aaee59673c"} Mar 18 13:07:32.936712 master-0 kubenswrapper[3938]: I0318 13:07:32.936426 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xpppb" podStartSLOduration=4.821050518 podStartE2EDuration="49.936408315s" podCreationTimestamp="2026-03-18 13:06:43 +0000 UTC" firstStartedPulling="2026-03-18 13:06:44.64664204 +0000 UTC m=+63.182388845" lastFinishedPulling="2026-03-18 13:07:29.761999837 +0000 UTC m=+108.297746642" observedRunningTime="2026-03-18 13:07:32.935763727 +0000 UTC m=+111.471510542" watchObservedRunningTime="2026-03-18 13:07:32.936408315 +0000 UTC m=+111.472155120" Mar 18 13:07:33.342073 master-0 kubenswrapper[3938]: I0318 13:07:33.341987 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:33.342299 master-0 kubenswrapper[3938]: E0318 13:07:33.342172 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:33.766459 master-0 kubenswrapper[3938]: I0318 13:07:33.766343 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"235692ebcb069cd3b0daa7bc3e69b18f1efe26c2efbade3dfc8c88c68bcbece8"} Mar 18 13:07:33.767477 master-0 kubenswrapper[3938]: I0318 13:07:33.767087 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"cc5c9df56403b33b32b76ee3cb8b41cf7c4c57f100d6d03f921033c66c8f4aaf"} Mar 18 13:07:33.767477 master-0 kubenswrapper[3938]: I0318 13:07:33.767124 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"12e7bc9ee72b84e1472786cd07b00ce0f9e9e2be5ad295b2cfe738e8f0bf2056"} Mar 18 13:07:33.767477 master-0 kubenswrapper[3938]: I0318 13:07:33.767138 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"f8ed35a7df74d908f678b4ea7de221e5bc814222fbb9550adbac881103093724"} Mar 18 13:07:33.767477 master-0 kubenswrapper[3938]: I0318 13:07:33.767150 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"125850b4feef3a37393d7154395fcf1cdcd130b936735e6861159c20d30d2910"} Mar 18 13:07:34.341982 master-0 kubenswrapper[3938]: I0318 13:07:34.341696 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:34.341982 master-0 kubenswrapper[3938]: E0318 13:07:34.341889 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:34.776668 master-0 kubenswrapper[3938]: I0318 13:07:34.776605 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"fd13d07e6804011f39f87dffdb8d8ca47cbf662a32fa71b4baa1d89a42ea954e"} Mar 18 13:07:35.341456 master-0 kubenswrapper[3938]: I0318 13:07:35.341354 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:35.341793 master-0 kubenswrapper[3938]: E0318 13:07:35.341553 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:36.342108 master-0 kubenswrapper[3938]: I0318 13:07:36.341846 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:36.342108 master-0 kubenswrapper[3938]: E0318 13:07:36.342069 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:36.787738 master-0 kubenswrapper[3938]: I0318 13:07:36.787670 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"62d37ee632109b5449a8178cbb1696f1e51740e0f2be1f92b8da12b427e94f2d"} Mar 18 13:07:37.341695 master-0 kubenswrapper[3938]: I0318 13:07:37.341532 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:37.341695 master-0 kubenswrapper[3938]: E0318 13:07:37.341690 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:38.342412 master-0 kubenswrapper[3938]: I0318 13:07:38.342057 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:38.361542 master-0 kubenswrapper[3938]: E0318 13:07:38.342573 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:38.797828 master-0 kubenswrapper[3938]: I0318 13:07:38.797766 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" event={"ID":"20dc979a-732b-43b5-acc2-118e4c350470","Type":"ContainerStarted","Data":"c9350b4679f802c5a1f280422616a0809c602e582448874645594547a54a7258"} Mar 18 13:07:38.799186 master-0 kubenswrapper[3938]: I0318 13:07:38.799148 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:38.799186 master-0 kubenswrapper[3938]: I0318 13:07:38.799184 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:38.799321 master-0 kubenswrapper[3938]: I0318 13:07:38.799194 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:38.822763 master-0 kubenswrapper[3938]: I0318 13:07:38.822408 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:38.831445 master-0 kubenswrapper[3938]: I0318 13:07:38.831349 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" podStartSLOduration=7.831324314 podStartE2EDuration="7.831324314s" podCreationTimestamp="2026-03-18 13:07:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:38.82483486 +0000 UTC m=+117.360581675" watchObservedRunningTime="2026-03-18 13:07:38.831324314 +0000 UTC m=+117.367071119" Mar 18 13:07:38.836217 master-0 kubenswrapper[3938]: I0318 13:07:38.836185 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:07:39.341541 master-0 kubenswrapper[3938]: I0318 13:07:39.341465 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:39.341762 master-0 kubenswrapper[3938]: E0318 13:07:39.341558 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:40.342257 master-0 kubenswrapper[3938]: I0318 13:07:40.342173 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:40.343047 master-0 kubenswrapper[3938]: E0318 13:07:40.342349 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:40.486417 master-0 kubenswrapper[3938]: I0318 13:07:40.486369 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-zlgkc"] Mar 18 13:07:40.486631 master-0 kubenswrapper[3938]: I0318 13:07:40.486502 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:40.486631 master-0 kubenswrapper[3938]: E0318 13:07:40.486612 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:40.494096 master-0 kubenswrapper[3938]: I0318 13:07:40.494057 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq2j4"] Mar 18 13:07:40.804526 master-0 kubenswrapper[3938]: I0318 13:07:40.804461 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:40.804735 master-0 kubenswrapper[3938]: E0318 13:07:40.804678 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:42.134739 master-0 kubenswrapper[3938]: E0318 13:07:42.134666 3938 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 18 13:07:42.341671 master-0 kubenswrapper[3938]: I0318 13:07:42.341586 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:42.341671 master-0 kubenswrapper[3938]: I0318 13:07:42.341660 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:42.342913 master-0 kubenswrapper[3938]: E0318 13:07:42.342848 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:42.343160 master-0 kubenswrapper[3938]: E0318 13:07:42.343012 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:42.493779 master-0 kubenswrapper[3938]: E0318 13:07:42.493422 3938 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 13:07:44.342144 master-0 kubenswrapper[3938]: I0318 13:07:44.342053 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:44.343152 master-0 kubenswrapper[3938]: I0318 13:07:44.342054 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:44.343152 master-0 kubenswrapper[3938]: E0318 13:07:44.342217 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:44.343152 master-0 kubenswrapper[3938]: E0318 13:07:44.342327 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:46.341556 master-0 kubenswrapper[3938]: I0318 13:07:46.341395 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:46.341556 master-0 kubenswrapper[3938]: I0318 13:07:46.341407 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:46.342088 master-0 kubenswrapper[3938]: E0318 13:07:46.341627 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-zlgkc" podUID="2cad2401-dab1-49f7-870e-a742ebfe323f" Mar 18 13:07:46.342088 master-0 kubenswrapper[3938]: E0318 13:07:46.341650 3938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2j4" podUID="5e691486-8540-4b79-8eed-b0fb829071db" Mar 18 13:07:48.159170 master-0 kubenswrapper[3938]: I0318 13:07:48.159120 3938 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 18 13:07:48.341831 master-0 kubenswrapper[3938]: I0318 13:07:48.341761 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:07:48.341831 master-0 kubenswrapper[3938]: I0318 13:07:48.341796 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:48.344254 master-0 kubenswrapper[3938]: I0318 13:07:48.344196 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:07:48.344329 master-0 kubenswrapper[3938]: I0318 13:07:48.344279 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:07:48.344329 master-0 kubenswrapper[3938]: I0318 13:07:48.344323 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 13:07:48.803853 master-0 kubenswrapper[3938]: I0318 13:07:48.803734 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"] Mar 18 13:07:48.804433 master-0 kubenswrapper[3938]: I0318 13:07:48.804375 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:48.806568 master-0 kubenswrapper[3938]: I0318 13:07:48.806509 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:07:48.807278 master-0 kubenswrapper[3938]: I0318 13:07:48.807209 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:07:48.807568 master-0 kubenswrapper[3938]: I0318 13:07:48.807519 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:07:48.808658 master-0 kubenswrapper[3938]: I0318 13:07:48.808593 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 13:07:48.822002 master-0 kubenswrapper[3938]: E0318 13:07:48.821954 3938 secret.go:189] Couldn't get secret 
openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:07:48.822166 master-0 kubenswrapper[3938]: I0318 13:07:48.822046 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:07:48.822166 master-0 kubenswrapper[3938]: E0318 13:07:48.822078 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:52.822002606 +0000 UTC m=+191.357749411 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found Mar 18 13:07:48.923063 master-0 kubenswrapper[3938]: I0318 13:07:48.922889 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:48.923063 master-0 kubenswrapper[3938]: I0318 13:07:48.923100 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:48.924154 master-0 kubenswrapper[3938]: I0318 13:07:48.923202 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:48.968495 master-0 kubenswrapper[3938]: I0318 13:07:48.968414 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"] Mar 18 13:07:48.969190 master-0 kubenswrapper[3938]: I0318 13:07:48.969137 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:48.972529 master-0 kubenswrapper[3938]: I0318 13:07:48.972468 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:07:48.973010 master-0 kubenswrapper[3938]: I0318 13:07:48.972922 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:07:48.973309 master-0 kubenswrapper[3938]: I0318 13:07:48.973285 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:07:48.973522 master-0 kubenswrapper[3938]: I0318 13:07:48.972974 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:07:48.975579 master-0 kubenswrapper[3938]: I0318 13:07:48.975526 3938 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"] Mar 18 13:07:48.976297 master-0 kubenswrapper[3938]: I0318 13:07:48.976255 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:48.976438 master-0 kubenswrapper[3938]: I0318 13:07:48.976399 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"] Mar 18 13:07:48.976438 master-0 kubenswrapper[3938]: I0318 13:07:48.976866 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:48.978277 master-0 kubenswrapper[3938]: I0318 13:07:48.977566 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"] Mar 18 13:07:48.978277 master-0 kubenswrapper[3938]: I0318 13:07:48.978157 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" Mar 18 13:07:48.980189 master-0 kubenswrapper[3938]: I0318 13:07:48.978738 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-4v84b"] Mar 18 13:07:48.980189 master-0 kubenswrapper[3938]: I0318 13:07:48.979282 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:48.980189 master-0 kubenswrapper[3938]: I0318 13:07:48.979811 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"] Mar 18 13:07:48.980601 master-0 kubenswrapper[3938]: I0318 13:07:48.980563 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:48.981750 master-0 kubenswrapper[3938]: I0318 13:07:48.981651 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"] Mar 18 13:07:48.982565 master-0 kubenswrapper[3938]: I0318 13:07:48.982531 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"] Mar 18 13:07:48.983606 master-0 kubenswrapper[3938]: I0318 13:07:48.983561 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"] Mar 18 13:07:48.984253 master-0 kubenswrapper[3938]: I0318 13:07:48.984200 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:48.984310 master-0 kubenswrapper[3938]: I0318 13:07:48.984222 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:07:48.984725 master-0 kubenswrapper[3938]: I0318 13:07:48.984693 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:48.985559 master-0 kubenswrapper[3938]: I0318 13:07:48.985527 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 13:07:48.986283 master-0 kubenswrapper[3938]: I0318 13:07:48.986249 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 13:07:48.986343 master-0 kubenswrapper[3938]: I0318 13:07:48.986278 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:07:48.986488 master-0 kubenswrapper[3938]: I0318 13:07:48.986457 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 13:07:48.987682 master-0 kubenswrapper[3938]: I0318 13:07:48.987632 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 13:07:48.988446 master-0 kubenswrapper[3938]: I0318 13:07:48.988399 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:07:48.988758 master-0 kubenswrapper[3938]: I0318 13:07:48.988713 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"] Mar 18 13:07:48.989680 master-0 kubenswrapper[3938]: I0318 13:07:48.989524 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:48.991756 master-0 kubenswrapper[3938]: I0318 13:07:48.991689 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 13:07:48.991756 master-0 kubenswrapper[3938]: I0318 13:07:48.991743 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 13:07:48.993033 master-0 kubenswrapper[3938]: I0318 13:07:48.992979 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993218 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993222 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993318 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993356 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993529 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993527 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 
13:07:48.993592 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993602 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993542 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.993977 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.994736 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.994919 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.995036 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"] Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.995150 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.995242 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.995428 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 
13:07:48.995445 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.995652 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:07:48.997660 master-0 kubenswrapper[3938]: I0318 13:07:48.996645 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 13:07:48.999451 master-0 kubenswrapper[3938]: I0318 13:07:48.997874 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"] Mar 18 13:07:48.999451 master-0 kubenswrapper[3938]: I0318 13:07:48.998284 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:48.999451 master-0 kubenswrapper[3938]: I0318 13:07:48.999359 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:07:49.003691 master-0 kubenswrapper[3938]: I0318 13:07:49.001628 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:07:49.003691 master-0 kubenswrapper[3938]: I0318 13:07:49.002053 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 13:07:49.003691 master-0 kubenswrapper[3938]: I0318 13:07:49.002515 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:07:49.003691 master-0 kubenswrapper[3938]: I0318 13:07:49.002658 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:07:49.003691 master-0 
kubenswrapper[3938]: I0318 13:07:49.002789 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:07:49.003691 master-0 kubenswrapper[3938]: I0318 13:07:49.003438 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:07:49.003691 master-0 kubenswrapper[3938]: I0318 13:07:49.003581 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:07:49.004810 master-0 kubenswrapper[3938]: I0318 13:07:49.003718 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:07:49.010186 master-0 kubenswrapper[3938]: I0318 13:07:49.010139 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:07:49.011692 master-0 kubenswrapper[3938]: I0318 13:07:49.011657 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"] Mar 18 13:07:49.015763 master-0 kubenswrapper[3938]: I0318 13:07:49.015638 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"] Mar 18 13:07:49.016110 master-0 kubenswrapper[3938]: I0318 13:07:49.015817 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.016110 master-0 kubenswrapper[3938]: I0318 13:07:49.016109 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.017875 master-0 kubenswrapper[3938]: I0318 13:07:49.017836 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:07:49.018034 master-0 kubenswrapper[3938]: I0318 13:07:49.018020 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:07:49.018142 master-0 kubenswrapper[3938]: I0318 13:07:49.018117 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:07:49.018272 master-0 kubenswrapper[3938]: I0318 13:07:49.018245 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 13:07:49.018556 master-0 kubenswrapper[3938]: I0318 13:07:49.018528 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:07:49.018681 master-0 kubenswrapper[3938]: I0318 13:07:49.018656 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 13:07:49.018802 master-0 kubenswrapper[3938]: I0318 13:07:49.018779 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 13:07:49.019002 master-0 kubenswrapper[3938]: I0318 13:07:49.018974 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:07:49.019130 master-0 kubenswrapper[3938]: I0318 13:07:49.019101 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"] Mar 18 13:07:49.019252 master-0 kubenswrapper[3938]: I0318 13:07:49.019207 3938 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:07:49.019829 master-0 kubenswrapper[3938]: I0318 13:07:49.019460 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.021897 master-0 kubenswrapper[3938]: I0318 13:07:49.021845 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 13:07:49.022102 master-0 kubenswrapper[3938]: I0318 13:07:49.021870 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"] Mar 18 13:07:49.022222 master-0 kubenswrapper[3938]: I0318 13:07:49.021879 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 13:07:49.022337 master-0 kubenswrapper[3938]: I0318 13:07:49.022001 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 13:07:49.022443 master-0 kubenswrapper[3938]: I0318 13:07:49.022337 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:07:49.022443 master-0 kubenswrapper[3938]: I0318 13:07:49.022034 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 13:07:49.022443 master-0 kubenswrapper[3938]: I0318 13:07:49.022414 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 13:07:49.022767 master-0 kubenswrapper[3938]: I0318 13:07:49.022030 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 13:07:49.023243 master-0 kubenswrapper[3938]: I0318 13:07:49.023204 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.024223 master-0 kubenswrapper[3938]: I0318 13:07:49.024160 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.024431 master-0 kubenswrapper[3938]: I0318 13:07:49.024238 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.024598 master-0 kubenswrapper[3938]: I0318 13:07:49.024463 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 13:07:49.024598 master-0 kubenswrapper[3938]: I0318 13:07:49.024561 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.024598 master-0 kubenswrapper[3938]: I0318 13:07:49.024599 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 13:07:49.024985 master-0 kubenswrapper[3938]: I0318 13:07:49.024708 3938 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.025219 master-0 kubenswrapper[3938]: I0318 13:07:49.025044 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:49.025584 master-0 kubenswrapper[3938]: I0318 13:07:49.025503 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:49.025972 master-0 kubenswrapper[3938]: I0318 13:07:49.025905 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.026051 master-0 kubenswrapper[3938]: I0318 13:07:49.025922 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"] Mar 18 13:07:49.026051 master-0 kubenswrapper[3938]: I0318 13:07:49.026016 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.026164 master-0 kubenswrapper[3938]: I0318 13:07:49.026073 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.026227 master-0 kubenswrapper[3938]: I0318 13:07:49.026162 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.026341 master-0 kubenswrapper[3938]: I0318 13:07:49.026300 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:49.026410 master-0 kubenswrapper[3938]: I0318 13:07:49.026366 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:07:49.026469 master-0 kubenswrapper[3938]: I0318 13:07:49.026372 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.026535 master-0 kubenswrapper[3938]: I0318 13:07:49.026465 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.026535 master-0 kubenswrapper[3938]: I0318 13:07:49.026512 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.026637 master-0 kubenswrapper[3938]: I0318 13:07:49.026540 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.027673 master-0 kubenswrapper[3938]: I0318 13:07:49.027645 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"] Mar 18 13:07:49.027908 master-0 kubenswrapper[3938]: I0318 13:07:49.027878 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:07:49.028326 master-0 kubenswrapper[3938]: I0318 13:07:49.028288 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.028415 master-0 kubenswrapper[3938]: I0318 13:07:49.028371 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:49.028483 master-0 kubenswrapper[3938]: I0318 13:07:49.028446 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:49.029389 master-0 kubenswrapper[3938]: I0318 13:07:49.029359 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"] Mar 18 13:07:49.029822 master-0 kubenswrapper[3938]: I0318 13:07:49.029791 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:07:49.029912 master-0 kubenswrapper[3938]: I0318 13:07:49.029829 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"
Mar 18 13:07:49.030967 master-0 kubenswrapper[3938]: I0318 13:07:49.030913 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"]
Mar 18 13:07:49.031495 master-0 kubenswrapper[3938]: I0318 13:07:49.031477 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"
Mar 18 13:07:49.032060 master-0 kubenswrapper[3938]: I0318 13:07:49.031994 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:07:49.032251 master-0 kubenswrapper[3938]: I0318 13:07:49.032014 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 13:07:49.032300 master-0 kubenswrapper[3938]: I0318 13:07:49.032271 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:07:49.032451 master-0 kubenswrapper[3938]: I0318 13:07:49.032415 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 13:07:49.033128 master-0 kubenswrapper[3938]: I0318 13:07:49.033088 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"]
Mar 18 13:07:49.033216 master-0 kubenswrapper[3938]: I0318 13:07:49.033193 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 13:07:49.033361 master-0 kubenswrapper[3938]: I0318 13:07:49.033317 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"
Mar 18 13:07:49.033475 master-0 kubenswrapper[3938]: I0318 13:07:49.033447 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 13:07:49.033570 master-0 kubenswrapper[3938]: I0318 13:07:49.033549 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:07:49.033845 master-0 kubenswrapper[3938]: I0318 13:07:49.033819 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:07:49.033971 master-0 kubenswrapper[3938]: I0318 13:07:49.033867 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 13:07:49.033971 master-0 kubenswrapper[3938]: I0318 13:07:49.033891 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 13:07:49.033971 master-0 kubenswrapper[3938]: I0318 13:07:49.033925 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 13:07:49.034164 master-0 kubenswrapper[3938]: I0318 13:07:49.034054 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 13:07:49.034283 master-0 kubenswrapper[3938]: I0318 13:07:49.034261 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 13:07:49.035611 master-0 kubenswrapper[3938]: I0318 13:07:49.035275 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 13:07:49.035978 master-0 kubenswrapper[3938]: I0318 13:07:49.035885 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"]
Mar 18 13:07:49.037035 master-0 kubenswrapper[3938]: I0318 13:07:49.036990 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:07:49.037189 master-0 kubenswrapper[3938]: I0318 13:07:49.037095 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 13:07:49.039023 master-0 kubenswrapper[3938]: I0318 13:07:49.037924 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 13:07:49.039023 master-0 kubenswrapper[3938]: I0318 13:07:49.038655 3938 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 13:07:49.039562 master-0 kubenswrapper[3938]: I0318 13:07:49.039526 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"]
Mar 18 13:07:49.041765 master-0 kubenswrapper[3938]: I0318 13:07:49.041681 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 13:07:49.042705 master-0 kubenswrapper[3938]: I0318 13:07:49.042666 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128737 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128800 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128840 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128874 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128905 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128959 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.128997 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129027 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gv8b\" (UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129059 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129110 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129200 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129344 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129413 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"
Mar 18 13:07:49.134211 master-0 kubenswrapper[3938]: I0318 13:07:49.129469 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.129517 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.129553 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc69w\" (UniqueName: \"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.129585 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130338 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130196 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130436 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130497 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdqs\" (UniqueName: \"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130550 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130607 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130657 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9tzl\" (UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130715 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130752 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130833 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:07:49.134820 master-0 kubenswrapper[3938]: I0318 13:07:49.130893 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.130956 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.130980 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131009 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131053 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: E0318 13:07:49.131071 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131092 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: E0318 13:07:49.131170 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.631138408 +0000 UTC m=+128.166885253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131234 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131301 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131565 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131586 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131611 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131646 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131818 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:07:49.135340 master-0 kubenswrapper[3938]: I0318 13:07:49.131853 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.131884 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.131927 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.131968 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.131974 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132019 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132042 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132060 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132081 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132103 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132121 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132145 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132163 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132235 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132302 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.135992 master-0 kubenswrapper[3938]: I0318 13:07:49.132408 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132467 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132499 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132532 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132641 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132698 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132741 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132779 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132809 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132859 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132892 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132918 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.132985 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.133018 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.133090 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw64j\" (UniqueName: \"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"
Mar 18 13:07:49.136404 master-0 kubenswrapper[3938]: I0318 13:07:49.133258 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:07:49.136824 master-0 kubenswrapper[3938]: I0318 13:07:49.133334 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName:
\"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.136824 master-0 kubenswrapper[3938]: I0318 13:07:49.133394 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.136824 master-0 kubenswrapper[3938]: I0318 13:07:49.133496 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.136824 master-0 kubenswrapper[3938]: I0318 13:07:49.135219 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.136824 master-0 kubenswrapper[3938]: I0318 13:07:49.136711 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.137529 master-0 kubenswrapper[3938]: I0318 13:07:49.137492 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.137843 master-0 kubenswrapper[3938]: I0318 13:07:49.137813 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252191 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252232 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252252 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252269 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252308 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252331 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252356 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: 
\"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252381 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252399 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252433 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252461 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252479 3938 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252497 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252522 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.260544 master-0 kubenswrapper[3938]: I0318 13:07:49.252539 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252557 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: 
\"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252574 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252591 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252608 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw64j\" (UniqueName: \"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252625 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:07:49.261714 master-0 
kubenswrapper[3938]: I0318 13:07:49.252641 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252660 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252684 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252703 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252719 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252743 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252760 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252779 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252796 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") 
pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:49.261714 master-0 kubenswrapper[3938]: I0318 13:07:49.252813 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252832 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gv8b\" (UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252855 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252873 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:49.262314 master-0 
kubenswrapper[3938]: I0318 13:07:49.252889 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252906 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252924 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252955 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc69w\" (UniqueName: \"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252974 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdqs\" (UniqueName: 
\"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.252991 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.253007 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.253031 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.253047 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9tzl\" (UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.253063 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.253087 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.262314 master-0 kubenswrapper[3938]: I0318 13:07:49.253103 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253118 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 
13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253143 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253158 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253177 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253192 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253232 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253248 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253265 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253284 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253301 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: 
\"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.253319 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: E0318 13:07:49.256126 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: E0318 13:07:49.256225 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.756201277 +0000 UTC m=+128.291948082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: I0318 13:07:49.257760 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.262880 master-0 kubenswrapper[3938]: E0318 13:07:49.259156 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.259253 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.759228643 +0000 UTC m=+128.294975468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: I0318 13:07:49.259955 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: I0318 13:07:49.260750 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.260871 3938 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.260927 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.760911021 +0000 UTC m=+128.296657826 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: I0318 13:07:49.261074 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.261289 3938 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.261368 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.761344143 +0000 UTC m=+128.297091048 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: I0318 13:07:49.261418 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.261493 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.261535 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.761521108 +0000 UTC m=+128.297268013 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: I0318 13:07:49.261810 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.261909 3938 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: E0318 13:07:49.261956 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.76192539 +0000 UTC m=+128.297672195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:07:49.263662 master-0 kubenswrapper[3938]: I0318 13:07:49.262509 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: I0318 13:07:49.263539 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: E0318 13:07:49.264035 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: E0318 13:07:49.264078 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.76406817 +0000 UTC m=+128.299814975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: E0318 13:07:49.264184 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: E0318 13:07:49.264206 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.764199834 +0000 UTC m=+128.299946639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: E0318 13:07:49.264240 3938 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:49.264381 master-0 kubenswrapper[3938]: E0318 13:07:49.264259 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.764253906 +0000 UTC m=+128.300000711 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:07:49.264762 master-0 kubenswrapper[3938]: I0318 13:07:49.264641 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.264762 master-0 kubenswrapper[3938]: E0318 13:07:49.264661 3938 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:49.264762 master-0 kubenswrapper[3938]: E0318 13:07:49.264715 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.764699578 +0000 UTC m=+128.300446383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:07:49.265350 master-0 kubenswrapper[3938]: I0318 13:07:49.265311 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.265471 master-0 kubenswrapper[3938]: I0318 13:07:49.265436 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.265593 master-0 kubenswrapper[3938]: I0318 13:07:49.265556 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:49.266109 master-0 kubenswrapper[3938]: I0318 13:07:49.266083 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: 
\"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.266186 master-0 kubenswrapper[3938]: E0318 13:07:49.266161 3938 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:49.266228 master-0 kubenswrapper[3938]: E0318 13:07:49.266201 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.766187841 +0000 UTC m=+128.301934646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:07:49.266616 master-0 kubenswrapper[3938]: E0318 13:07:49.266589 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:49.266822 master-0 kubenswrapper[3938]: I0318 13:07:49.266750 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.267712 master-0 kubenswrapper[3938]: I0318 13:07:49.267677 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.267980 master-0 kubenswrapper[3938]: I0318 13:07:49.267916 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.268459 master-0 kubenswrapper[3938]: I0318 13:07:49.268289 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.268596 master-0 kubenswrapper[3938]: I0318 13:07:49.268562 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.268833 master-0 kubenswrapper[3938]: I0318 13:07:49.268804 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.270038 master-0 kubenswrapper[3938]: I0318 13:07:49.269984 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.270150 master-0 kubenswrapper[3938]: I0318 13:07:49.270046 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.271052 master-0 kubenswrapper[3938]: I0318 13:07:49.271006 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.272040 master-0 kubenswrapper[3938]: I0318 13:07:49.271995 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.326152 master-0 kubenswrapper[3938]: E0318 13:07:49.326021 3938 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.766621993 +0000 UTC m=+128.302368878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:49.436464 master-0 kubenswrapper[3938]: I0318 13:07:49.436345 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"] Mar 18 13:07:49.436464 master-0 kubenswrapper[3938]: I0318 13:07:49.436441 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"] Mar 18 13:07:49.438153 master-0 kubenswrapper[3938]: I0318 13:07:49.437554 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"] Mar 18 13:07:49.438903 master-0 kubenswrapper[3938]: I0318 13:07:49.438815 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"] Mar 18 13:07:49.442083 master-0 kubenswrapper[3938]: I0318 13:07:49.440053 3938 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-tvnss"] Mar 18 13:07:49.442083 master-0 kubenswrapper[3938]: I0318 13:07:49.440619 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.444762 master-0 kubenswrapper[3938]: I0318 13:07:49.444610 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"] Mar 18 13:07:49.444975 master-0 kubenswrapper[3938]: I0318 13:07:49.444898 3938 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 13:07:49.445884 master-0 kubenswrapper[3938]: I0318 13:07:49.445289 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"] Mar 18 13:07:49.446227 master-0 kubenswrapper[3938]: I0318 13:07:49.446187 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"] Mar 18 13:07:49.447122 master-0 kubenswrapper[3938]: I0318 13:07:49.447090 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"] Mar 18 13:07:49.447798 master-0 kubenswrapper[3938]: I0318 13:07:49.447765 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"] Mar 18 13:07:49.448483 master-0 kubenswrapper[3938]: I0318 13:07:49.448452 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"] Mar 18 13:07:49.450325 master-0 kubenswrapper[3938]: I0318 13:07:49.450261 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-4v84b"] Mar 18 13:07:49.450404 master-0 kubenswrapper[3938]: I0318 13:07:49.450383 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"] Mar 18 13:07:49.451676 master-0 kubenswrapper[3938]: I0318 13:07:49.451606 3938 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"] Mar 18 13:07:49.453178 master-0 kubenswrapper[3938]: I0318 13:07:49.453133 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:49.562036 master-0 kubenswrapper[3938]: I0318 13:07:49.561924 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.562241 master-0 kubenswrapper[3938]: I0318 13:07:49.562104 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.562241 master-0 kubenswrapper[3938]: I0318 13:07:49.562204 3938 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwfnk\" (UniqueName: \"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.592190 master-0 kubenswrapper[3938]: I0318 13:07:49.592075 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"] Mar 18 13:07:49.592190 master-0 kubenswrapper[3938]: I0318 13:07:49.592124 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"] Mar 18 13:07:49.598623 master-0 kubenswrapper[3938]: I0318 13:07:49.598589 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.600864 master-0 kubenswrapper[3938]: I0318 13:07:49.600834 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.605394 master-0 kubenswrapper[3938]: I0318 13:07:49.605349 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.609404 master-0 kubenswrapper[3938]: I0318 13:07:49.609367 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: 
\"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.609650 master-0 kubenswrapper[3938]: I0318 13:07:49.609618 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.612503 master-0 kubenswrapper[3938]: I0318 13:07:49.612458 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.613094 master-0 kubenswrapper[3938]: I0318 13:07:49.613059 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw64j\" (UniqueName: \"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.613450 master-0 kubenswrapper[3938]: I0318 13:07:49.613410 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.613674 master-0 
kubenswrapper[3938]: I0318 13:07:49.613645 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:49.617052 master-0 kubenswrapper[3938]: I0318 13:07:49.617015 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.618765 master-0 kubenswrapper[3938]: I0318 13:07:49.618736 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.619356 master-0 kubenswrapper[3938]: I0318 13:07:49.619317 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.619599 master-0 kubenswrapper[3938]: I0318 13:07:49.619571 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9tzl\" (UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") 
pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.619777 master-0 kubenswrapper[3938]: I0318 13:07:49.619686 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdqs\" (UniqueName: \"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:07:49.620029 master-0 kubenswrapper[3938]: I0318 13:07:49.619983 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:49.621048 master-0 kubenswrapper[3938]: I0318 13:07:49.621021 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:07:49.622087 master-0 kubenswrapper[3938]: I0318 13:07:49.622059 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 
13:07:49.622177 master-0 kubenswrapper[3938]: I0318 13:07:49.622145 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.622341 master-0 kubenswrapper[3938]: I0318 13:07:49.622310 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:49.622496 master-0 kubenswrapper[3938]: I0318 13:07:49.622430 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" Mar 18 13:07:49.622590 master-0 kubenswrapper[3938]: I0318 13:07:49.622552 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.622725 master-0 kubenswrapper[3938]: I0318 13:07:49.622704 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gv8b\" 
(UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.622841 master-0 kubenswrapper[3938]: I0318 13:07:49.622818 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:49.624213 master-0 kubenswrapper[3938]: I0318 13:07:49.624185 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc69w\" (UniqueName: \"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.630096 master-0 kubenswrapper[3938]: I0318 13:07:49.630068 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:07:49.662096 master-0 kubenswrapper[3938]: I0318 13:07:49.662043 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: I0318 13:07:49.662652 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwfnk\" (UniqueName: \"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: I0318 13:07:49.662753 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: I0318 13:07:49.662800 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: I0318 13:07:49.662920 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: I0318 13:07:49.662965 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod 
\"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: I0318 13:07:49.663704 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: E0318 13:07:49.663790 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:49.665252 master-0 kubenswrapper[3938]: E0318 13:07:49.663852 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.663820634 +0000 UTC m=+129.199567439 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:07:49.674144 master-0 kubenswrapper[3938]: I0318 13:07:49.672634 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"] Mar 18 13:07:49.674144 master-0 kubenswrapper[3938]: I0318 13:07:49.672683 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"] Mar 18 13:07:49.674144 master-0 kubenswrapper[3938]: I0318 13:07:49.672696 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"] Mar 18 13:07:49.675471 master-0 kubenswrapper[3938]: I0318 13:07:49.675440 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"] Mar 18 13:07:49.676192 master-0 kubenswrapper[3938]: I0318 13:07:49.676178 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"] Mar 18 13:07:49.676283 master-0 kubenswrapper[3938]: I0318 13:07:49.675612 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" Mar 18 13:07:49.676654 master-0 kubenswrapper[3938]: I0318 13:07:49.676619 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"] Mar 18 13:07:49.681114 master-0 kubenswrapper[3938]: I0318 13:07:49.681093 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"] Mar 18 13:07:49.692780 master-0 kubenswrapper[3938]: I0318 13:07:49.692747 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:07:49.721228 master-0 kubenswrapper[3938]: I0318 13:07:49.721160 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:07:49.762403 master-0 kubenswrapper[3938]: I0318 13:07:49.762349 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766293 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766328 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766357 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766373 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 
13:07:49.766406 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766430 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766449 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766468 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766492 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod 
\"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766510 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: I0318 13:07:49.766527 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766615 3938 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766639 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766688 3938 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766752 3938 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 
13:07:49.766755 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766790 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766821 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766831 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:07:49.770660 master-0 kubenswrapper[3938]: E0318 13:07:49.766859 3938 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766649 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766637032 +0000 UTC m=+129.302383837 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766879 3938 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766885 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766878079 +0000 UTC m=+129.302624884 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766897 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766891139 +0000 UTC m=+129.302637944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766907 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766902889 +0000 UTC m=+129.302649694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766916 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.76691168 +0000 UTC m=+129.302658475 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766925 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.76692137 +0000 UTC m=+129.302668175 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766952 3938 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766966 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.76693044 +0000 UTC m=+129.302677245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766978 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766973951 +0000 UTC m=+129.302720756 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:07:49.772287 master-0 kubenswrapper[3938]: E0318 13:07:49.766987 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766983142 +0000 UTC m=+129.302729937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:07:49.773152 master-0 kubenswrapper[3938]: E0318 13:07:49.766999 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.766993442 +0000 UTC m=+129.302740247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:07:49.773152 master-0 kubenswrapper[3938]: E0318 13:07:49.767010 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:50.767005172 +0000 UTC m=+129.302751977 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:49.773152 master-0 kubenswrapper[3938]: I0318 13:07:49.769947 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:07:49.795058 master-0 kubenswrapper[3938]: I0318 13:07:49.794772 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:07:49.817828 master-0 kubenswrapper[3938]: I0318 13:07:49.817577 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:07:49.823188 master-0 kubenswrapper[3938]: I0318 13:07:49.823144 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:07:49.830536 master-0 kubenswrapper[3938]: I0318 13:07:49.830496 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:07:49.861794 master-0 kubenswrapper[3938]: I0318 13:07:49.861735 3938 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:07:49.867571 master-0 kubenswrapper[3938]: I0318 13:07:49.867530 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:07:49.867755 master-0 kubenswrapper[3938]: E0318 13:07:49.867731 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:49.867809 master-0 kubenswrapper[3938]: E0318 13:07:49.867793 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:07:50.867775212 +0000 UTC m=+129.403522017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:49.873291 master-0 kubenswrapper[3938]: I0318 13:07:49.873265 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwfnk\" (UniqueName: \"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:50.057646 master-0 kubenswrapper[3938]: I0318 13:07:50.057568 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:07:50.682859 master-0 kubenswrapper[3938]: I0318 13:07:50.682759 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:50.683722 master-0 kubenswrapper[3938]: E0318 13:07:50.683003 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:50.683722 master-0 kubenswrapper[3938]: E0318 13:07:50.683091 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.683070906 +0000 UTC m=+131.218817781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:07:50.784245 master-0 kubenswrapper[3938]: I0318 13:07:50.784163 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:50.784478 master-0 kubenswrapper[3938]: I0318 13:07:50.784266 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:50.784478 master-0 kubenswrapper[3938]: E0318 13:07:50.784412 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:50.784478 master-0 kubenswrapper[3938]: E0318 13:07:50.784412 3938 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:50.784570 master-0 kubenswrapper[3938]: E0318 13:07:50.784485 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:52.784462933 +0000 UTC m=+131.320209778 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:50.784570 master-0 kubenswrapper[3938]: E0318 13:07:50.784520 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.784498034 +0000 UTC m=+131.320244849 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:50.784637 master-0 kubenswrapper[3938]: I0318 13:07:50.784574 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:50.784668 master-0 kubenswrapper[3938]: I0318 13:07:50.784636 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:50.784697 master-0 kubenswrapper[3938]: I0318 13:07:50.784677 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:07:50.784727 master-0 kubenswrapper[3938]: I0318 13:07:50.784713 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:50.784755 master-0 kubenswrapper[3938]: I0318 13:07:50.784743 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:50.784813 master-0 kubenswrapper[3938]: I0318 13:07:50.784783 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:50.784849 master-0 kubenswrapper[3938]: I0318 13:07:50.784822 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:50.784876 master-0 kubenswrapper[3938]: I0318 13:07:50.784846 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:07:50.784905 master-0 kubenswrapper[3938]: I0318 13:07:50.784888 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:50.785024 master-0 kubenswrapper[3938]: E0318 13:07:50.784998 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:50.785068 master-0 kubenswrapper[3938]: E0318 13:07:50.785037 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785027189 +0000 UTC m=+131.320774014 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:50.785098 master-0 kubenswrapper[3938]: E0318 13:07:50.785090 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:50.785127 master-0 kubenswrapper[3938]: E0318 13:07:50.785114 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785106102 +0000 UTC m=+131.320852917 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:50.785170 master-0 kubenswrapper[3938]: E0318 13:07:50.785155 3938 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:50.785200 master-0 kubenswrapper[3938]: E0318 13:07:50.785183 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785176034 +0000 UTC m=+131.320922849 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:07:50.785235 master-0 kubenswrapper[3938]: E0318 13:07:50.785224 3938 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:50.785261 master-0 kubenswrapper[3938]: E0318 13:07:50.785246 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785238955 +0000 UTC m=+131.320985770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:07:50.785299 master-0 kubenswrapper[3938]: E0318 13:07:50.785284 3938 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:50.785329 master-0 kubenswrapper[3938]: E0318 13:07:50.785313 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785305667 +0000 UTC m=+131.321052482 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:07:50.785368 master-0 kubenswrapper[3938]: E0318 13:07:50.785355 3938 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:50.785425 master-0 kubenswrapper[3938]: E0318 13:07:50.785413 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.78540047 +0000 UTC m=+131.321147285 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:07:50.785464 master-0 kubenswrapper[3938]: E0318 13:07:50.785457 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:50.785490 master-0 kubenswrapper[3938]: E0318 13:07:50.785481 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785474412 +0000 UTC m=+131.321221227 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:50.785539 master-0 kubenswrapper[3938]: E0318 13:07:50.785524 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:07:50.785566 master-0 kubenswrapper[3938]: E0318 13:07:50.785555 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785547724 +0000 UTC m=+131.321294549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:07:50.785602 master-0 kubenswrapper[3938]: E0318 13:07:50.785594 3938 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:50.785642 master-0 kubenswrapper[3938]: E0318 13:07:50.785616 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.785609366 +0000 UTC m=+131.321356191 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:07:50.836604 master-0 kubenswrapper[3938]: I0318 13:07:50.836499 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-tvnss" event={"ID":"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971","Type":"ContainerStarted","Data":"7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4"} Mar 18 13:07:50.886623 master-0 kubenswrapper[3938]: I0318 13:07:50.886473 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:07:50.886863 master-0 kubenswrapper[3938]: E0318 13:07:50.886713 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:50.886863 master-0 kubenswrapper[3938]: E0318 13:07:50.886815 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:07:52.886795288 +0000 UTC m=+131.422542173 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:52.744306 master-0 kubenswrapper[3938]: I0318 13:07:52.743945 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:07:52.745124 master-0 kubenswrapper[3938]: E0318 13:07:52.744157 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:52.745124 master-0 kubenswrapper[3938]: E0318 13:07:52.744414 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.74439242 +0000 UTC m=+135.280139225 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:07:52.845264 master-0 kubenswrapper[3938]: I0318 13:07:52.845169 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:07:52.845264 master-0 kubenswrapper[3938]: I0318 13:07:52.845233 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:07:52.845264 master-0 kubenswrapper[3938]: I0318 13:07:52.845263 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845390 3938 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845411 3938 secret.go:189] Couldn't get secret 
openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845441 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845422967 +0000 UTC m=+135.381169782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845490 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845472868 +0000 UTC m=+135.381219673 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: I0318 13:07:52.845559 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845588 3938 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: I0318 13:07:52.845614 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: I0318 13:07:52.845638 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845667 3938 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845646673 +0000 UTC m=+135.381393478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845696 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845724 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845715295 +0000 UTC m=+135.381462160 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: I0318 13:07:52.845695 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845738 3938 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845759 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845752506 +0000 UTC m=+135.381499311 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: I0318 13:07:52.845772 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:07:52.845784 master-0 kubenswrapper[3938]: E0318 13:07:52.845784 3938 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845804 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845797467 +0000 UTC m=+135.381544272 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: I0318 13:07:52.845819 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845827 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845846 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845841389 +0000 UTC m=+135.381588194 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845876 3938 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845895 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.84588971 +0000 UTC m=+135.381636515 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845897 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: I0318 13:07:52.845923 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845958 3938 
secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.845977 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.845971272 +0000 UTC m=+135.381718077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: I0318 13:07:52.845990 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.846084 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.846078085 +0000 UTC m=+135.381824890 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found
Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.846140 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:52.846788 master-0 kubenswrapper[3938]: E0318 13:07:52.846194 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.846182298 +0000 UTC m=+135.381929113 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:52.899026 master-0 kubenswrapper[3938]: I0318 13:07:52.892024 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"]
Mar 18 13:07:52.947507 master-0 kubenswrapper[3938]: I0318 13:07:52.947447 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:07:52.947995 master-0 kubenswrapper[3938]: E0318 13:07:52.947904 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:07:52.948236 master-0 kubenswrapper[3938]: E0318 13:07:52.948210 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:07:56.948179633 +0000 UTC m=+135.483926478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found
Mar 18 13:07:53.849283 master-0 kubenswrapper[3938]: I0318 13:07:53.849165 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" event={"ID":"8ce8e99d-7b02-4bf4-a438-adde851918cb","Type":"ContainerStarted","Data":"13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769"}
Mar 18 13:07:55.424106 master-0 kubenswrapper[3938]: I0318 13:07:55.424008 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"]
Mar 18 13:07:55.432182 master-0 kubenswrapper[3938]: I0318 13:07:55.432120 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"]
Mar 18 13:07:55.436349 master-0 kubenswrapper[3938]: I0318 13:07:55.436283 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"]
Mar 18 13:07:55.438497 master-0 kubenswrapper[3938]: I0318 13:07:55.438462 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"]
Mar 18 13:07:55.442279 master-0 kubenswrapper[3938]: I0318 13:07:55.441567 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"]
Mar 18 13:07:55.443780 master-0 kubenswrapper[3938]: I0318 13:07:55.443737 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"]
Mar 18 13:07:55.445163 master-0 kubenswrapper[3938]: I0318 13:07:55.445135 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"]
Mar 18 13:07:55.446522 master-0 kubenswrapper[3938]: I0318 13:07:55.446477 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"]
Mar 18 13:07:55.448499 master-0 kubenswrapper[3938]: I0318 13:07:55.448321 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"]
Mar 18 13:07:55.476781 master-0 kubenswrapper[3938]: I0318 13:07:55.476635 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"]
Mar 18 13:07:55.481213 master-0 kubenswrapper[3938]: I0318 13:07:55.481174 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"]
Mar 18 13:07:55.857967 master-0 kubenswrapper[3938]: I0318 13:07:55.857896 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" event={"ID":"93ea3c78-dede-468f-89a5-551133f794c5","Type":"ContainerStarted","Data":"95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29"}
Mar 18 13:07:55.859406 master-0 kubenswrapper[3938]: I0318 13:07:55.859379 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" event={"ID":"c2c4572e-0b38-4db1-96e5-6a35e29048e7","Type":"ContainerStarted","Data":"c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff"}
Mar 18 13:07:55.860846 master-0 kubenswrapper[3938]: I0318 13:07:55.860820 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerStarted","Data":"8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89"}
Mar 18 13:07:55.862352 master-0 kubenswrapper[3938]: I0318 13:07:55.862319 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" event={"ID":"83a4f641-d28f-42aa-a228-f6086d720fe4","Type":"ContainerStarted","Data":"cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61"}
Mar 18 13:07:55.863577 master-0 kubenswrapper[3938]: I0318 13:07:55.863553 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" event={"ID":"1bf0ea4e-8b08-488f-b252-39580f46b756","Type":"ContainerStarted","Data":"513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d"}
Mar 18 13:07:55.864580 master-0 kubenswrapper[3938]: I0318 13:07:55.864550 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" event={"ID":"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41","Type":"ContainerStarted","Data":"8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6"}
Mar 18 13:07:55.865575 master-0 kubenswrapper[3938]: I0318 13:07:55.865549 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerStarted","Data":"2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277"}
Mar 18 13:07:55.866711 master-0 kubenswrapper[3938]: I0318 13:07:55.866687 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" event={"ID":"5bccf60c-5b07-4f40-8430-12bfb62661c7","Type":"ContainerStarted","Data":"0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160"}
Mar 18 13:07:55.867823 master-0 kubenswrapper[3938]: I0318 13:07:55.867800 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" event={"ID":"1ad580a2-7f58-4d66-adad-0a53d9777655","Type":"ContainerStarted","Data":"b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8"}
Mar 18 13:07:55.868923 master-0 kubenswrapper[3938]: I0318 13:07:55.868899 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" event={"ID":"cb471665-2b07-48df-9881-3fb663390b23","Type":"ContainerStarted","Data":"f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97"}
Mar 18 13:07:55.870175 master-0 kubenswrapper[3938]: I0318 13:07:55.870151 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" event={"ID":"c9a9baa5-9334-47dc-8d0c-eafc96a679b3","Type":"ContainerStarted","Data":"8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119"}
Mar 18 13:07:56.788745 master-0 kubenswrapper[3938]: I0318 13:07:56.788585 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"
Mar 18 13:07:56.789324 master-0 kubenswrapper[3938]: E0318 13:07:56.788823 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:07:56.789324 master-0 kubenswrapper[3938]: E0318 13:07:56.788909 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.788888673 +0000 UTC m=+143.324635478 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found
Mar 18 13:07:56.889297 master-0 kubenswrapper[3938]: I0318 13:07:56.889232 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:07:56.889297 master-0 kubenswrapper[3938]: I0318 13:07:56.889275 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:07:56.889297 master-0 kubenswrapper[3938]: I0318 13:07:56.889308 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: I0318 13:07:56.889336 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: I0318 13:07:56.889353 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: I0318 13:07:56.889370 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889464 3938 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889502 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889489808 +0000 UTC m=+143.425236613 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889545 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889561 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.88955556 +0000 UTC m=+143.425302365 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889550 3938 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889605 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: E0318 13:07:56.889622 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889617241 +0000 UTC m=+143.425364036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found
Mar 18 13:07:56.889661 master-0 kubenswrapper[3938]: I0318 13:07:56.889636 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889689 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889683503 +0000 UTC m=+143.425430298 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889725 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889742 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889737485 +0000 UTC m=+143.425484290 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889749 3938 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889787 3938 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889806 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889800827 +0000 UTC m=+143.425547632 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889842 3938 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: I0318 13:07:56.889758 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889863 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889833738 +0000 UTC m=+143.425580583 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: I0318 13:07:56.889896 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889914 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.889897209 +0000 UTC m=+143.425644144 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.889932 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: I0318 13:07:56.889997 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.890102 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:56.890171 master-0 kubenswrapper[3938]: E0318 13:07:56.890142 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.890132546 +0000 UTC m=+143.425879351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:56.890753 master-0 kubenswrapper[3938]: I0318 13:07:56.890137 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:07:56.890753 master-0 kubenswrapper[3938]: E0318 13:07:56.890157 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.890150216 +0000 UTC m=+143.425897021 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:56.890753 master-0 kubenswrapper[3938]: E0318 13:07:56.890223 3938 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 13:07:56.890753 master-0 kubenswrapper[3938]: E0318 13:07:56.890272 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.890260469 +0000 UTC m=+143.426007384 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found
Mar 18 13:07:56.991038 master-0 kubenswrapper[3938]: I0318 13:07:56.990984 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:07:56.991321 master-0 kubenswrapper[3938]: E0318 13:07:56.991261 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:07:56.991399 master-0 kubenswrapper[3938]: E0318 13:07:56.991371 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:04.991352518 +0000 UTC m=+143.527099393 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found
Mar 18 13:07:57.878877 master-0 kubenswrapper[3938]: I0318 13:07:57.878827 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" event={"ID":"c2c4572e-0b38-4db1-96e5-6a35e29048e7","Type":"ContainerStarted","Data":"d02c6c3cdba1a1883c0637cac9a306051c4ef216e0033461edc5cc690bbb087e"}
Mar 18 13:08:02.410751 master-0 kubenswrapper[3938]: I0318 13:08:02.410219 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:02.900419 master-0 kubenswrapper[3938]: I0318 13:08:02.898339 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" podStartSLOduration=104.898325866 podStartE2EDuration="1m44.898325866s" podCreationTimestamp="2026-03-18 13:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:02.897593076 +0000 UTC m=+141.433339881" watchObservedRunningTime="2026-03-18 13:08:02.898325866 +0000 UTC m=+141.434072671"
Mar 18 13:08:03.908532 master-0 kubenswrapper[3938]: I0318 13:08:03.908462 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" event={"ID":"8ce8e99d-7b02-4bf4-a438-adde851918cb","Type":"ContainerStarted","Data":"f140128413a59472c05ccbf8a67ba06b17c2bdd86a6d5881d2c8c4864d65b7ae"}
Mar 18 13:08:04.079351 master-0 kubenswrapper[3938]: I0318 13:08:04.079286 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:08:04.087724 master-0 kubenswrapper[3938]: I0318 13:08:04.087380 3938 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:08:04.257817 master-0 kubenswrapper[3938]: I0318 13:08:04.257699 3938 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:08:04.799287 master-0 kubenswrapper[3938]: I0318 13:08:04.799195 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"
Mar 18 13:08:04.799548 master-0 kubenswrapper[3938]: E0318 13:08:04.799399 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:08:04.799548 master-0 kubenswrapper[3938]: E0318 13:08:04.799493 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.799446759 +0000 UTC m=+159.335193564 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found
Mar 18 13:08:04.900494 master-0 kubenswrapper[3938]: I0318 13:08:04.900430 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:08:04.900494 master-0 kubenswrapper[3938]: I0318 13:08:04.900495 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:04.900734 master-0 kubenswrapper[3938]: E0318 13:08:04.900663 3938 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:08:04.900774 master-0 kubenswrapper[3938]: I0318 13:08:04.900710 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:04.900804 master-0 kubenswrapper[3938]: E0318 13:08:04.900749 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.900729384 +0000 UTC m=+159.436476269 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found
Mar 18 13:08:04.900840 master-0 kubenswrapper[3938]: E0318 13:08:04.900798 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:08:04.900840 master-0 kubenswrapper[3938]: E0318 13:08:04.900814 3938 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:08:04.900840 master-0 kubenswrapper[3938]: I0318 13:08:04.900817 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:04.900920 master-0 kubenswrapper[3938]: E0318 13:08:04.900850 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.900833617 +0000 UTC m=+159.436580512 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:08:04.900920 master-0 kubenswrapper[3938]: E0318 13:08:04.900865 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.900859298 +0000 UTC m=+159.436606103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:08:04.900920 master-0 kubenswrapper[3938]: E0318 13:08:04.900902 3938 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 13:08:04.901032 master-0 kubenswrapper[3938]: E0318 13:08:04.900969 3938 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:08:04.901032 master-0 kubenswrapper[3938]: E0318 13:08:04.900982 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.900965361 +0000 UTC m=+159.436712166 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found
Mar 18 13:08:04.901032 master-0 kubenswrapper[3938]: I0318 13:08:04.900900 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:08:04.901032 master-0 kubenswrapper[3938]: E0318 13:08:04.901002 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.900992151 +0000 UTC m=+159.436739026 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found
Mar 18 13:08:04.901032 master-0 kubenswrapper[3938]: I0318 13:08:04.901028 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:08:04.901166 master-0 kubenswrapper[3938]: I0318 13:08:04.901055 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:04.901166 master-0 kubenswrapper[3938]: I0318 13:08:04.901095 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:08:04.901166 master-0 kubenswrapper[3938]: I0318 13:08:04.901128 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") "
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:04.901166 master-0 kubenswrapper[3938]: I0318 13:08:04.901158 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:04.901268 master-0 kubenswrapper[3938]: E0318 13:08:04.901189 3938 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:04.901268 master-0 kubenswrapper[3938]: I0318 13:08:04.901202 3938 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:04.901268 master-0 kubenswrapper[3938]: E0318 13:08:04.901225 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.901211698 +0000 UTC m=+159.436958593 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:08:04.901348 master-0 kubenswrapper[3938]: E0318 13:08:04.901279 3938 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:08:04.901348 master-0 kubenswrapper[3938]: E0318 13:08:04.901300 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:04.901348 master-0 kubenswrapper[3938]: E0318 13:08:04.901306 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.90129731 +0000 UTC m=+159.437044275 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:08:04.901437 master-0 kubenswrapper[3938]: E0318 13:08:04.901359 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.901350031 +0000 UTC m=+159.437096916 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:04.901437 master-0 kubenswrapper[3938]: E0318 13:08:04.901382 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:08:04.901437 master-0 kubenswrapper[3938]: E0318 13:08:04.901409 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.901400953 +0000 UTC m=+159.437147858 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:08:04.901519 master-0 kubenswrapper[3938]: E0318 13:08:04.901445 3938 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:08:04.901519 master-0 kubenswrapper[3938]: E0318 13:08:04.901470 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.901461105 +0000 UTC m=+159.437208010 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:08:04.901519 master-0 kubenswrapper[3938]: E0318 13:08:04.901511 3938 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:04.901599 master-0 kubenswrapper[3938]: E0318 13:08:04.901534 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.901526166 +0000 UTC m=+159.437273081 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:08:04.931382 master-0 kubenswrapper[3938]: I0318 13:08:04.931317 3938 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-zlgkc"] Mar 18 13:08:04.938050 master-0 kubenswrapper[3938]: W0318 13:08:04.937955 3938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cad2401_dab1_49f7_870e_a742ebfe323f.slice/crio-fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083 WatchSource:0}: Error finding container fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083: Status 404 returned error can't find the container with id fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083 Mar 18 13:08:05.003714 master-0 kubenswrapper[3938]: I0318 13:08:05.002908 3938 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:05.003824 master-0 kubenswrapper[3938]: E0318 13:08:05.003756 3938 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:08:05.003824 master-0 kubenswrapper[3938]: E0318 13:08:05.003806 3938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:21.003789878 +0000 UTC m=+159.539536683 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:08:05.916463 master-0 kubenswrapper[3938]: I0318 13:08:05.916410 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-zlgkc" event={"ID":"2cad2401-dab1-49f7-870e-a742ebfe323f","Type":"ContainerStarted","Data":"213030e65a7e980471f809d19db9fca7258130148ddc6dc08ed7e5643a442dc7"} Mar 18 13:08:05.916463 master-0 kubenswrapper[3938]: I0318 13:08:05.916468 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-zlgkc" event={"ID":"2cad2401-dab1-49f7-870e-a742ebfe323f","Type":"ContainerStarted","Data":"fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083"} Mar 18 13:08:05.916740 master-0 kubenswrapper[3938]: I0318 13:08:05.916607 3938 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:08:05.970066 master-0 kubenswrapper[3938]: I0318 13:08:05.966161 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" podStartSLOduration=98.672575401 podStartE2EDuration="1m48.96613936s" podCreationTimestamp="2026-03-18 13:06:17 +0000 UTC" firstStartedPulling="2026-03-18 13:07:52.909815164 +0000 UTC m=+131.445561979" lastFinishedPulling="2026-03-18 13:08:03.203379133 +0000 UTC m=+141.739125938" observedRunningTime="2026-03-18 13:08:04.961985838 +0000 UTC m=+143.497732653" watchObservedRunningTime="2026-03-18 13:08:05.96613936 +0000 UTC m=+144.501886165" Mar 18 13:08:05.970066 master-0 kubenswrapper[3938]: I0318 13:08:05.966802 3938 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-zlgkc" podStartSLOduration=66.966796258 podStartE2EDuration="1m6.966796258s" podCreationTimestamp="2026-03-18 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:05.964758421 +0000 UTC m=+144.500505246" watchObservedRunningTime="2026-03-18 13:08:05.966796258 +0000 UTC m=+144.502543063" Mar 18 13:08:06.921436 master-0 kubenswrapper[3938]: I0318 13:08:06.921350 3938 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-tvnss" event={"ID":"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971","Type":"ContainerStarted","Data":"02333c9651065fc0d64b2c2b3fc99e2100e7a27095249ccd9ebd9445546c8246"} Mar 18 13:08:09.218012 master-0 kubenswrapper[3938]: I0318 13:08:09.210423 3938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-tvnss" podStartSLOduration=8.092104041 podStartE2EDuration="21.210409521s" podCreationTimestamp="2026-03-18 13:07:48 +0000 UTC" firstStartedPulling="2026-03-18 13:07:50.072769499 +0000 UTC m=+128.608516304" lastFinishedPulling="2026-03-18 13:08:03.191074979 +0000 UTC m=+141.726821784" observedRunningTime="2026-03-18 13:08:09.208887079 +0000 UTC m=+147.744633884" watchObservedRunningTime="2026-03-18 13:08:09.210409521 +0000 UTC m=+147.746156326" Mar 18 13:08:11.102975 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 13:08:11.118454 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 13:08:11.118739 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 13:08:11.120891 master-0 systemd[1]: kubelet.service: Consumed 9.916s CPU time. Mar 18 13:08:11.133159 master-0 systemd[1]: Starting Kubernetes Kubelet... 
Mar 18 13:08:11.235399 master-0 kubenswrapper[7146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:08:11.236712 master-0 kubenswrapper[7146]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 13:08:11.236771 master-0 kubenswrapper[7146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:08:11.236830 master-0 kubenswrapper[7146]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:08:11.236873 master-0 kubenswrapper[7146]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 13:08:11.236916 master-0 kubenswrapper[7146]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:08:11.237169 master-0 kubenswrapper[7146]: I0318 13:08:11.237085 7146 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 13:08:11.240705 master-0 kubenswrapper[7146]: W0318 13:08:11.240691 7146 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:08:11.240780 master-0 kubenswrapper[7146]: W0318 13:08:11.240771 7146 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:08:11.240867 master-0 kubenswrapper[7146]: W0318 13:08:11.240857 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:08:11.240926 master-0 kubenswrapper[7146]: W0318 13:08:11.240917 7146 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:08:11.241017 master-0 kubenswrapper[7146]: W0318 13:08:11.241005 7146 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:08:11.241090 master-0 kubenswrapper[7146]: W0318 13:08:11.241079 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:08:11.241150 master-0 kubenswrapper[7146]: W0318 13:08:11.241142 7146 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:08:11.241202 master-0 kubenswrapper[7146]: W0318 13:08:11.241194 7146 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:08:11.241248 master-0 kubenswrapper[7146]: W0318 13:08:11.241240 7146 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:08:11.241295 master-0 kubenswrapper[7146]: W0318 13:08:11.241287 7146 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:08:11.241343 master-0 kubenswrapper[7146]: W0318 13:08:11.241336 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:08:11.241393 master-0 kubenswrapper[7146]: W0318 13:08:11.241384 7146 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:08:11.241451 master-0 kubenswrapper[7146]: W0318 13:08:11.241442 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:08:11.241501 master-0 kubenswrapper[7146]: W0318 13:08:11.241493 7146 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:08:11.241565 master-0 kubenswrapper[7146]: W0318 13:08:11.241549 7146 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:08:11.241629 master-0 kubenswrapper[7146]: W0318 13:08:11.241620 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:08:11.241691 master-0 kubenswrapper[7146]: W0318 13:08:11.241680 7146 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:08:11.242016 master-0 kubenswrapper[7146]: W0318 13:08:11.242004 7146 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:08:11.242100 master-0 kubenswrapper[7146]: W0318 13:08:11.242089 7146 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:08:11.242164 master-0 kubenswrapper[7146]: W0318 13:08:11.242153 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:08:11.242228 master-0 kubenswrapper[7146]: W0318 13:08:11.242218 7146 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:08:11.242286 master-0 kubenswrapper[7146]: W0318 13:08:11.242277 7146 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:08:11.242335 master-0 kubenswrapper[7146]: W0318 13:08:11.242327 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:08:11.242385 master-0 kubenswrapper[7146]: W0318 13:08:11.242375 7146 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:08:11.242455 master-0 kubenswrapper[7146]: W0318 13:08:11.242438 7146 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:08:11.242524 master-0 kubenswrapper[7146]: W0318 13:08:11.242514 7146 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:08:11.242580 master-0 kubenswrapper[7146]: W0318 13:08:11.242571 7146 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:08:11.242641 master-0 kubenswrapper[7146]: W0318 13:08:11.242630 7146 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:08:11.242701 master-0 kubenswrapper[7146]: W0318 13:08:11.242691 7146 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:08:11.242764 master-0 kubenswrapper[7146]: W0318 13:08:11.242754 7146 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:08:11.242823 master-0 kubenswrapper[7146]: W0318 13:08:11.242813 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:08:11.242885 master-0 kubenswrapper[7146]: W0318 13:08:11.242875 7146 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:08:11.242971 master-0 kubenswrapper[7146]: W0318 13:08:11.242960 7146 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:08:11.243043 master-0 kubenswrapper[7146]: W0318 13:08:11.243033 7146 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:08:11.243100 master-0 kubenswrapper[7146]: W0318 13:08:11.243090 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:08:11.243165 master-0 kubenswrapper[7146]: W0318 13:08:11.243156 7146 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:08:11.243223 master-0 kubenswrapper[7146]: W0318 13:08:11.243214 7146 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:08:11.243280 master-0 kubenswrapper[7146]: W0318 13:08:11.243269 7146 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:08:11.243369 master-0 kubenswrapper[7146]: W0318 13:08:11.243358 7146 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:08:11.243460 master-0 kubenswrapper[7146]: W0318 13:08:11.243450 7146 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:08:11.243515 master-0 kubenswrapper[7146]: W0318 13:08:11.243506 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:08:11.243564 master-0 kubenswrapper[7146]: W0318 13:08:11.243556 7146 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:08:11.243611 master-0 kubenswrapper[7146]: W0318 13:08:11.243603 7146 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:08:11.243657 master-0 kubenswrapper[7146]: W0318 13:08:11.243649 7146 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:08:11.243702 master-0 kubenswrapper[7146]: W0318 13:08:11.243695 7146 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:08:11.243750 master-0 kubenswrapper[7146]: W0318 13:08:11.243742 7146 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:08:11.243802 master-0 kubenswrapper[7146]: W0318 13:08:11.243792 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:08:11.243852 master-0 kubenswrapper[7146]: W0318 13:08:11.243845 7146 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:08:11.243905 master-0 kubenswrapper[7146]: W0318 13:08:11.243895 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:08:11.243992 master-0 kubenswrapper[7146]: W0318 13:08:11.243981 7146 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:08:11.244049 master-0 kubenswrapper[7146]: W0318 13:08:11.244039 7146 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:08:11.244136 master-0 kubenswrapper[7146]: W0318 13:08:11.244125 7146 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:08:11.244200 master-0 kubenswrapper[7146]: W0318 13:08:11.244189 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:08:11.244251 master-0 kubenswrapper[7146]: W0318 13:08:11.244243 7146 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:08:11.244302 master-0 kubenswrapper[7146]: W0318 13:08:11.244294 7146 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:08:11.244350 master-0 kubenswrapper[7146]: W0318 13:08:11.244341 7146 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:08:11.244402 master-0 kubenswrapper[7146]: W0318 13:08:11.244393 7146 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:08:11.244527 master-0 kubenswrapper[7146]: W0318 13:08:11.244448 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:08:11.244603 master-0 kubenswrapper[7146]: W0318 13:08:11.244591 7146 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:08:11.244667 master-0 kubenswrapper[7146]: W0318 13:08:11.244658 7146 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:08:11.244721 master-0 kubenswrapper[7146]: W0318 13:08:11.244713 7146 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:08:11.244781 master-0 kubenswrapper[7146]: W0318 13:08:11.244768 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:08:11.244842 master-0 kubenswrapper[7146]: W0318 13:08:11.244834 7146 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:08:11.244908 master-0 kubenswrapper[7146]: W0318 13:08:11.244897 7146 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:08:11.245000 master-0 kubenswrapper[7146]: W0318 13:08:11.244989 7146 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:08:11.245061 master-0 kubenswrapper[7146]: W0318 13:08:11.245052 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:08:11.245121 master-0 kubenswrapper[7146]: W0318 13:08:11.245111 7146 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:08:11.245187 master-0 kubenswrapper[7146]: W0318 13:08:11.245176 7146 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:08:11.245248 master-0 kubenswrapper[7146]: W0318 13:08:11.245239 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:08:11.245305 master-0 kubenswrapper[7146]: W0318 13:08:11.245295 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:08:11.245373 master-0 kubenswrapper[7146]: W0318 13:08:11.245362 7146 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:08:11.245457 master-0 kubenswrapper[7146]: W0318 13:08:11.245447 7146 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:08:11.245628 master-0 kubenswrapper[7146]: I0318 13:08:11.245612 7146 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 13:08:11.245704 master-0 kubenswrapper[7146]: I0318 13:08:11.245688 7146 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 13:08:11.245765 master-0 kubenswrapper[7146]: I0318 13:08:11.245753 7146 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 13:08:11.245825 master-0 kubenswrapper[7146]: I0318 13:08:11.245812 7146 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 13:08:11.245886 master-0 kubenswrapper[7146]: I0318 13:08:11.245875 7146 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 13:08:11.245973 master-0 kubenswrapper[7146]: I0318 13:08:11.245958 7146 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 13:08:11.246043 master-0 kubenswrapper[7146]: I0318 13:08:11.246032 7146 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 13:08:11.246097 master-0 kubenswrapper[7146]: I0318 13:08:11.246088 7146 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 13:08:11.246149 master-0 kubenswrapper[7146]: I0318 13:08:11.246140 7146 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 13:08:11.246202 master-0 kubenswrapper[7146]: I0318 13:08:11.246191 7146 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 13:08:11.246262 master-0 kubenswrapper[7146]: I0318 13:08:11.246251 7146 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 13:08:11.246312 master-0 kubenswrapper[7146]: I0318 13:08:11.246303 7146 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 13:08:11.246364 master-0 kubenswrapper[7146]: I0318 13:08:11.246354 7146 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 13:08:11.246429 master-0 kubenswrapper[7146]: I0318 13:08:11.246417 7146 flags.go:64] FLAG: --cgroup-root=""
Mar 18 13:08:11.246485 master-0 kubenswrapper[7146]: I0318 13:08:11.246475 7146 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 13:08:11.246542 master-0 kubenswrapper[7146]: I0318 13:08:11.246531 7146 flags.go:64] FLAG: --client-ca-file=""
Mar 18 13:08:11.246596 master-0 kubenswrapper[7146]: I0318 13:08:11.246587 7146 flags.go:64] FLAG: --cloud-config=""
Mar 18 13:08:11.246643 master-0 kubenswrapper[7146]: I0318 13:08:11.246635 7146 flags.go:64] FLAG: --cloud-provider=""
Mar 18 13:08:11.246707 master-0 kubenswrapper[7146]: I0318 13:08:11.246692 7146 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 13:08:11.246777 master-0 kubenswrapper[7146]: I0318 13:08:11.246766 7146 flags.go:64] FLAG: --cluster-domain=""
Mar 18 13:08:11.246827 master-0 kubenswrapper[7146]: I0318 13:08:11.246818 7146 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 13:08:11.246876 master-0 kubenswrapper[7146]: I0318 13:08:11.246866 7146 flags.go:64] FLAG: --config-dir=""
Mar 18 13:08:11.246934 master-0 kubenswrapper[7146]: I0318 13:08:11.246923 7146 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 13:08:11.247021 master-0 kubenswrapper[7146]: I0318 13:08:11.247010 7146 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 13:08:11.247077 master-0 kubenswrapper[7146]: I0318 13:08:11.247068 7146 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 13:08:11.247135 master-0 kubenswrapper[7146]: I0318 13:08:11.247123 7146 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 13:08:11.247198 master-0 kubenswrapper[7146]: I0318 13:08:11.247188 7146 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 13:08:11.247250 master-0 kubenswrapper[7146]: I0318 13:08:11.247241 7146 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 13:08:11.247362 master-0 kubenswrapper[7146]: I0318 13:08:11.247350 7146 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 13:08:11.247417 master-0 kubenswrapper[7146]: I0318 13:08:11.247409 7146 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 13:08:11.247466 master-0 kubenswrapper[7146]: I0318 13:08:11.247457 7146 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 13:08:11.247516 master-0 kubenswrapper[7146]: I0318 13:08:11.247507 7146 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 13:08:11.247597 master-0 kubenswrapper[7146]: I0318 13:08:11.247561 7146 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 13:08:11.247655 master-0 kubenswrapper[7146]: I0318 13:08:11.247645 7146 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 13:08:11.247721 master-0 kubenswrapper[7146]: I0318 13:08:11.247709 7146 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 13:08:11.247787 master-0 kubenswrapper[7146]: I0318 13:08:11.247775 7146 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 13:08:11.247841 master-0 kubenswrapper[7146]: I0318 13:08:11.247833 7146 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 13:08:11.247891 master-0 kubenswrapper[7146]: I0318 13:08:11.247883 7146 flags.go:64] FLAG: --enable-server="true"
Mar 18 13:08:11.248219 master-0 kubenswrapper[7146]: I0318 13:08:11.248205 7146 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 13:08:11.248279 master-0 kubenswrapper[7146]: I0318 13:08:11.248271 7146 flags.go:64] FLAG: --event-burst="100"
Mar 18 13:08:11.248331 master-0 kubenswrapper[7146]: I0318 13:08:11.248322 7146 flags.go:64] FLAG: --event-qps="50"
Mar 18 13:08:11.248378 master-0 kubenswrapper[7146]: I0318 13:08:11.248369 7146 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 13:08:11.248447 master-0 kubenswrapper[7146]: I0318 13:08:11.248438 7146 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 13:08:11.248500 master-0 kubenswrapper[7146]: I0318 13:08:11.248490 7146 flags.go:64] FLAG: --eviction-hard=""
Mar 18 13:08:11.248548 master-0 kubenswrapper[7146]: I0318 13:08:11.248539 7146 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 13:08:11.248725 master-0 kubenswrapper[7146]: I0318 13:08:11.248714 7146 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 13:08:11.248784 master-0 kubenswrapper[7146]: I0318 13:08:11.248775 7146 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 13:08:11.248833 master-0 kubenswrapper[7146]: I0318 13:08:11.248824 7146 flags.go:64] FLAG: --eviction-soft=""
Mar 18 13:08:11.248881 master-0 kubenswrapper[7146]: I0318 13:08:11.248873 7146 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 13:08:11.248929 master-0 kubenswrapper[7146]: I0318 13:08:11.248921 7146 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 13:08:11.248988 master-0 kubenswrapper[7146]: I0318 13:08:11.248979 7146 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 13:08:11.249040 master-0 kubenswrapper[7146]: I0318 13:08:11.249032 7146 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 13:08:11.249089 master-0 kubenswrapper[7146]: I0318 13:08:11.249081 7146 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 13:08:11.249138 master-0 kubenswrapper[7146]: I0318 13:08:11.249130 7146 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 13:08:11.249187 master-0 kubenswrapper[7146]: I0318 13:08:11.249173 7146 flags.go:64] FLAG: --feature-gates=""
Mar 18 13:08:11.249234 master-0 kubenswrapper[7146]: I0318 13:08:11.249225 7146 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 13:08:11.249277 master-0 kubenswrapper[7146]: I0318 13:08:11.249269 7146 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 13:08:11.249326 master-0 kubenswrapper[7146]: I0318 13:08:11.249317 7146 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 13:08:11.249374 master-0 kubenswrapper[7146]: I0318 13:08:11.249365 7146 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 13:08:11.249421 master-0 kubenswrapper[7146]: I0318 13:08:11.249412 7146 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 13:08:11.249469 master-0 kubenswrapper[7146]: I0318 13:08:11.249461 7146 flags.go:64] FLAG: --help="false"
Mar 18 13:08:11.249516 master-0 kubenswrapper[7146]: I0318 13:08:11.249508 7146 flags.go:64] FLAG: --hostname-override=""
Mar 18 13:08:11.249573 master-0 kubenswrapper[7146]: I0318 13:08:11.249562 7146 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 13:08:11.249627 master-0 kubenswrapper[7146]: I0318 13:08:11.249618 7146 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 13:08:11.249679 master-0 kubenswrapper[7146]: I0318 13:08:11.249670 7146 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 13:08:11.249725 master-0 kubenswrapper[7146]: I0318 13:08:11.249717 7146 flags.go:64] FLAG:
--image-credential-provider-config="" Mar 18 13:08:11.249773 master-0 kubenswrapper[7146]: I0318 13:08:11.249764 7146 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 13:08:11.249834 master-0 kubenswrapper[7146]: I0318 13:08:11.249825 7146 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 13:08:11.249883 master-0 kubenswrapper[7146]: I0318 13:08:11.249875 7146 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 13:08:11.249951 master-0 kubenswrapper[7146]: I0318 13:08:11.249925 7146 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 13:08:11.250007 master-0 kubenswrapper[7146]: I0318 13:08:11.249998 7146 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 13:08:11.250196 master-0 kubenswrapper[7146]: I0318 13:08:11.250181 7146 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 13:08:11.250316 master-0 kubenswrapper[7146]: I0318 13:08:11.250305 7146 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 13:08:11.250375 master-0 kubenswrapper[7146]: I0318 13:08:11.250367 7146 flags.go:64] FLAG: --kube-reserved="" Mar 18 13:08:11.250420 master-0 kubenswrapper[7146]: I0318 13:08:11.250412 7146 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 13:08:11.250482 master-0 kubenswrapper[7146]: I0318 13:08:11.250470 7146 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 13:08:11.250546 master-0 kubenswrapper[7146]: I0318 13:08:11.250536 7146 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 13:08:11.250598 master-0 kubenswrapper[7146]: I0318 13:08:11.250589 7146 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 13:08:11.250645 master-0 kubenswrapper[7146]: I0318 13:08:11.250637 7146 flags.go:64] FLAG: --lock-file="" Mar 18 13:08:11.250688 master-0 kubenswrapper[7146]: I0318 13:08:11.250680 7146 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 13:08:11.250740 master-0 kubenswrapper[7146]: I0318 13:08:11.250731 7146 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 13:08:11.250831 
master-0 kubenswrapper[7146]: I0318 13:08:11.250814 7146 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 13:08:11.250884 master-0 kubenswrapper[7146]: I0318 13:08:11.250875 7146 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 13:08:11.250931 master-0 kubenswrapper[7146]: I0318 13:08:11.250923 7146 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 13:08:11.251004 master-0 kubenswrapper[7146]: I0318 13:08:11.250992 7146 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 13:08:11.251074 master-0 kubenswrapper[7146]: I0318 13:08:11.251065 7146 flags.go:64] FLAG: --logging-format="text" Mar 18 13:08:11.251129 master-0 kubenswrapper[7146]: I0318 13:08:11.251119 7146 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 13:08:11.251177 master-0 kubenswrapper[7146]: I0318 13:08:11.251169 7146 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 13:08:11.251229 master-0 kubenswrapper[7146]: I0318 13:08:11.251219 7146 flags.go:64] FLAG: --manifest-url="" Mar 18 13:08:11.251282 master-0 kubenswrapper[7146]: I0318 13:08:11.251271 7146 flags.go:64] FLAG: --manifest-url-header="" Mar 18 13:08:11.251358 master-0 kubenswrapper[7146]: I0318 13:08:11.251348 7146 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 13:08:11.251412 master-0 kubenswrapper[7146]: I0318 13:08:11.251402 7146 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 13:08:11.251458 master-0 kubenswrapper[7146]: I0318 13:08:11.251449 7146 flags.go:64] FLAG: --max-pods="110" Mar 18 13:08:11.251505 master-0 kubenswrapper[7146]: I0318 13:08:11.251497 7146 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 13:08:11.251552 master-0 kubenswrapper[7146]: I0318 13:08:11.251544 7146 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 13:08:11.251601 master-0 kubenswrapper[7146]: I0318 13:08:11.251592 7146 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 13:08:11.251648 master-0 kubenswrapper[7146]: I0318 
13:08:11.251640 7146 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 13:08:11.251692 master-0 kubenswrapper[7146]: I0318 13:08:11.251684 7146 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 13:08:11.251735 master-0 kubenswrapper[7146]: I0318 13:08:11.251727 7146 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 13:08:11.251795 master-0 kubenswrapper[7146]: I0318 13:08:11.251776 7146 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 13:08:11.251913 master-0 kubenswrapper[7146]: I0318 13:08:11.251904 7146 flags.go:64] FLAG: --node-status-max-images="50" Mar 18 13:08:11.251985 master-0 kubenswrapper[7146]: I0318 13:08:11.251975 7146 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 13:08:11.252039 master-0 kubenswrapper[7146]: I0318 13:08:11.252030 7146 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 13:08:11.252087 master-0 kubenswrapper[7146]: I0318 13:08:11.252078 7146 flags.go:64] FLAG: --pod-cidr="" Mar 18 13:08:11.252138 master-0 kubenswrapper[7146]: I0318 13:08:11.252125 7146 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 13:08:11.252186 master-0 kubenswrapper[7146]: I0318 13:08:11.252178 7146 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 13:08:11.252234 master-0 kubenswrapper[7146]: I0318 13:08:11.252226 7146 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 13:08:11.252281 master-0 kubenswrapper[7146]: I0318 13:08:11.252273 7146 flags.go:64] FLAG: --pods-per-core="0" Mar 18 13:08:11.252332 master-0 kubenswrapper[7146]: I0318 13:08:11.252324 7146 flags.go:64] FLAG: --port="10250" Mar 18 13:08:11.252381 master-0 kubenswrapper[7146]: I0318 13:08:11.252372 7146 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 13:08:11.252427 master-0 kubenswrapper[7146]: I0318 13:08:11.252420 7146 
flags.go:64] FLAG: --provider-id="" Mar 18 13:08:11.252475 master-0 kubenswrapper[7146]: I0318 13:08:11.252467 7146 flags.go:64] FLAG: --qos-reserved="" Mar 18 13:08:11.252522 master-0 kubenswrapper[7146]: I0318 13:08:11.252514 7146 flags.go:64] FLAG: --read-only-port="10255" Mar 18 13:08:11.252569 master-0 kubenswrapper[7146]: I0318 13:08:11.252561 7146 flags.go:64] FLAG: --register-node="true" Mar 18 13:08:11.252616 master-0 kubenswrapper[7146]: I0318 13:08:11.252608 7146 flags.go:64] FLAG: --register-schedulable="true" Mar 18 13:08:11.252667 master-0 kubenswrapper[7146]: I0318 13:08:11.252655 7146 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 13:08:11.252718 master-0 kubenswrapper[7146]: I0318 13:08:11.252710 7146 flags.go:64] FLAG: --registry-burst="10" Mar 18 13:08:11.252765 master-0 kubenswrapper[7146]: I0318 13:08:11.252757 7146 flags.go:64] FLAG: --registry-qps="5" Mar 18 13:08:11.252818 master-0 kubenswrapper[7146]: I0318 13:08:11.252810 7146 flags.go:64] FLAG: --reserved-cpus="" Mar 18 13:08:11.252869 master-0 kubenswrapper[7146]: I0318 13:08:11.252859 7146 flags.go:64] FLAG: --reserved-memory="" Mar 18 13:08:11.252923 master-0 kubenswrapper[7146]: I0318 13:08:11.252914 7146 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 13:08:11.253005 master-0 kubenswrapper[7146]: I0318 13:08:11.252996 7146 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 13:08:11.253050 master-0 kubenswrapper[7146]: I0318 13:08:11.253042 7146 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 13:08:11.253156 master-0 kubenswrapper[7146]: I0318 13:08:11.253146 7146 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 13:08:11.253205 master-0 kubenswrapper[7146]: I0318 13:08:11.253197 7146 flags.go:64] FLAG: --runonce="false" Mar 18 13:08:11.253252 master-0 kubenswrapper[7146]: I0318 13:08:11.253244 7146 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 13:08:11.253300 master-0 
kubenswrapper[7146]: I0318 13:08:11.253291 7146 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 18 13:08:11.253343 master-0 kubenswrapper[7146]: I0318 13:08:11.253335 7146 flags.go:64] FLAG: --seccomp-default="false" Mar 18 13:08:11.253386 master-0 kubenswrapper[7146]: I0318 13:08:11.253378 7146 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 18 13:08:11.253434 master-0 kubenswrapper[7146]: I0318 13:08:11.253425 7146 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 18 13:08:11.253482 master-0 kubenswrapper[7146]: I0318 13:08:11.253474 7146 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 18 13:08:11.253532 master-0 kubenswrapper[7146]: I0318 13:08:11.253523 7146 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 18 13:08:11.253575 master-0 kubenswrapper[7146]: I0318 13:08:11.253567 7146 flags.go:64] FLAG: --storage-driver-password="root" Mar 18 13:08:11.253621 master-0 kubenswrapper[7146]: I0318 13:08:11.253613 7146 flags.go:64] FLAG: --storage-driver-secure="false" Mar 18 13:08:11.253671 master-0 kubenswrapper[7146]: I0318 13:08:11.253662 7146 flags.go:64] FLAG: --storage-driver-table="stats" Mar 18 13:08:11.253719 master-0 kubenswrapper[7146]: I0318 13:08:11.253711 7146 flags.go:64] FLAG: --storage-driver-user="root" Mar 18 13:08:11.253763 master-0 kubenswrapper[7146]: I0318 13:08:11.253755 7146 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 18 13:08:11.253806 master-0 kubenswrapper[7146]: I0318 13:08:11.253798 7146 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 18 13:08:11.253855 master-0 kubenswrapper[7146]: I0318 13:08:11.253847 7146 flags.go:64] FLAG: --system-cgroups="" Mar 18 13:08:11.253907 master-0 kubenswrapper[7146]: I0318 13:08:11.253894 7146 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 18 13:08:11.254073 master-0 kubenswrapper[7146]: I0318 13:08:11.254063 7146 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 18 13:08:11.254122 master-0 
kubenswrapper[7146]: I0318 13:08:11.254114 7146 flags.go:64] FLAG: --tls-cert-file="" Mar 18 13:08:11.254168 master-0 kubenswrapper[7146]: I0318 13:08:11.254157 7146 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 18 13:08:11.254221 master-0 kubenswrapper[7146]: I0318 13:08:11.254212 7146 flags.go:64] FLAG: --tls-min-version="" Mar 18 13:08:11.254270 master-0 kubenswrapper[7146]: I0318 13:08:11.254261 7146 flags.go:64] FLAG: --tls-private-key-file="" Mar 18 13:08:11.254320 master-0 kubenswrapper[7146]: I0318 13:08:11.254311 7146 flags.go:64] FLAG: --topology-manager-policy="none" Mar 18 13:08:11.254370 master-0 kubenswrapper[7146]: I0318 13:08:11.254361 7146 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 18 13:08:11.254418 master-0 kubenswrapper[7146]: I0318 13:08:11.254410 7146 flags.go:64] FLAG: --topology-manager-scope="container" Mar 18 13:08:11.254471 master-0 kubenswrapper[7146]: I0318 13:08:11.254460 7146 flags.go:64] FLAG: --v="2" Mar 18 13:08:11.254522 master-0 kubenswrapper[7146]: I0318 13:08:11.254512 7146 flags.go:64] FLAG: --version="false" Mar 18 13:08:11.254571 master-0 kubenswrapper[7146]: I0318 13:08:11.254561 7146 flags.go:64] FLAG: --vmodule="" Mar 18 13:08:11.254617 master-0 kubenswrapper[7146]: I0318 13:08:11.254609 7146 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 18 13:08:11.254667 master-0 kubenswrapper[7146]: I0318 13:08:11.254658 7146 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 18 13:08:11.254835 master-0 kubenswrapper[7146]: W0318 13:08:11.254826 7146 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:08:11.254888 master-0 kubenswrapper[7146]: W0318 13:08:11.254881 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:08:11.254956 master-0 kubenswrapper[7146]: W0318 13:08:11.254947 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:08:11.255011 master-0 
kubenswrapper[7146]: W0318 13:08:11.255002 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:08:11.255096 master-0 kubenswrapper[7146]: W0318 13:08:11.255050 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:08:11.255164 master-0 kubenswrapper[7146]: W0318 13:08:11.255156 7146 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 13:08:11.255215 master-0 kubenswrapper[7146]: W0318 13:08:11.255207 7146 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:08:11.255264 master-0 kubenswrapper[7146]: W0318 13:08:11.255256 7146 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:08:11.255311 master-0 kubenswrapper[7146]: W0318 13:08:11.255303 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:08:11.255388 master-0 kubenswrapper[7146]: W0318 13:08:11.255379 7146 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:08:11.255435 master-0 kubenswrapper[7146]: W0318 13:08:11.255428 7146 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:08:11.255483 master-0 kubenswrapper[7146]: W0318 13:08:11.255475 7146 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:08:11.255533 master-0 kubenswrapper[7146]: W0318 13:08:11.255525 7146 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:08:11.255581 master-0 kubenswrapper[7146]: W0318 13:08:11.255574 7146 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:08:11.255623 master-0 kubenswrapper[7146]: W0318 13:08:11.255616 7146 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:08:11.255665 master-0 kubenswrapper[7146]: W0318 13:08:11.255658 7146 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 
13:08:11.255706 master-0 kubenswrapper[7146]: W0318 13:08:11.255699 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:08:11.255747 master-0 kubenswrapper[7146]: W0318 13:08:11.255740 7146 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:08:11.255792 master-0 kubenswrapper[7146]: W0318 13:08:11.255785 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:08:11.255841 master-0 kubenswrapper[7146]: W0318 13:08:11.255833 7146 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:08:11.255890 master-0 kubenswrapper[7146]: W0318 13:08:11.255882 7146 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:08:11.255947 master-0 kubenswrapper[7146]: W0318 13:08:11.255925 7146 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:08:11.255994 master-0 kubenswrapper[7146]: W0318 13:08:11.255986 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:08:11.256047 master-0 kubenswrapper[7146]: W0318 13:08:11.256040 7146 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:08:11.256094 master-0 kubenswrapper[7146]: W0318 13:08:11.256087 7146 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:08:11.256137 master-0 kubenswrapper[7146]: W0318 13:08:11.256130 7146 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:08:11.256178 master-0 kubenswrapper[7146]: W0318 13:08:11.256171 7146 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:08:11.256307 master-0 kubenswrapper[7146]: W0318 13:08:11.256298 7146 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:08:11.256355 master-0 kubenswrapper[7146]: W0318 13:08:11.256348 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:08:11.256403 master-0 kubenswrapper[7146]: W0318 
13:08:11.256395 7146 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256437 7146 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256446 7146 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256451 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256456 7146 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256460 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256464 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256470 7146 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256475 7146 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256479 7146 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256483 7146 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256488 7146 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256492 7146 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256496 7146 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256501 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256505 7146 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256509 7146 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256513 7146 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256518 7146 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:08:11.256996 master-0 kubenswrapper[7146]: W0318 13:08:11.256522 7146 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256526 7146 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256555 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256559 7146 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256563 7146 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256567 7146 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256571 7146 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256575 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256578 7146 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256582 7146 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256586 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256589 7146 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256593 7146 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256597 7146 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256600 7146 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256606 7146 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256610 7146 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256614 7146 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256617 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256621 7146 
feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:08:11.257545 master-0 kubenswrapper[7146]: W0318 13:08:11.256625 7146 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:08:11.258021 master-0 kubenswrapper[7146]: W0318 13:08:11.256628 7146 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:08:11.258021 master-0 kubenswrapper[7146]: W0318 13:08:11.256632 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:08:11.258021 master-0 kubenswrapper[7146]: W0318 13:08:11.256635 7146 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:08:11.258021 master-0 kubenswrapper[7146]: I0318 13:08:11.256649 7146 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 13:08:11.265045 master-0 kubenswrapper[7146]: I0318 13:08:11.264925 7146 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 18 13:08:11.265045 master-0 kubenswrapper[7146]: I0318 13:08:11.265043 7146 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 13:08:11.265260 master-0 kubenswrapper[7146]: W0318 13:08:11.265235 7146 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:08:11.265260 master-0 kubenswrapper[7146]: W0318 13:08:11.265253 7146 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:08:11.265260 master-0 kubenswrapper[7146]: W0318 13:08:11.265259 7146 feature_gate.go:330] unrecognized 
feature gate: MetricsCollectionProfiles Mar 18 13:08:11.265345 master-0 kubenswrapper[7146]: W0318 13:08:11.265266 7146 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:08:11.265345 master-0 kubenswrapper[7146]: W0318 13:08:11.265274 7146 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 13:08:11.265345 master-0 kubenswrapper[7146]: W0318 13:08:11.265279 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:08:11.265345 master-0 kubenswrapper[7146]: W0318 13:08:11.265339 7146 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:08:11.265345 master-0 kubenswrapper[7146]: W0318 13:08:11.265346 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265352 7146 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265357 7146 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265362 7146 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265367 7146 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265373 7146 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265378 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265384 7146 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265389 7146 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265394 7146 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265398 7146 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265403 7146 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265408 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265413 7146 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265417 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265422 7146 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265427 7146 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265432 7146 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265436 7146 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:08:11.265461 master-0 kubenswrapper[7146]: W0318 13:08:11.265440 7146 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265445 7146 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265495 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265502 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265506 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265511 7146 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265515 7146 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265520 7146 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265528 7146 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265533 7146 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265537 7146 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265542 7146 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265546 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265551 7146 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265556 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265560 7146 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265565 7146 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265569 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265574 7146 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265578 7146 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:08:11.265909 master-0 kubenswrapper[7146]: W0318 13:08:11.265583 7146 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265589 7146 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265597 7146 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265602 7146 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265607 7146 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265612 7146 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265616 7146 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265621 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265625 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265630 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265634 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265639 7146 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265643 7146 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265648 7146 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265653 7146 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265657 7146 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265662 7146 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265668 7146 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265673 7146 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:08:11.266399 master-0 kubenswrapper[7146]: W0318 13:08:11.265679 7146 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.265683 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.265688 7146 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.265692 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.265696 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.265784 7146 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.265792 7146 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: I0318 13:08:11.265800 7146 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266070 7146 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266083 7146 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266087 7146 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266091 7146 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266095 7146 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266099 7146 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266102 7146 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:08:11.266896 master-0 kubenswrapper[7146]: W0318 13:08:11.266106 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266110 7146 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266113 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266117 7146 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266121 7146 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266126 7146 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266131 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266135 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266233 7146 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266239 7146 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266244 7146 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266250 7146 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266255 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266259 7146 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266263 7146 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266270 7146 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266277 7146 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266282 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266286 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:08:11.267297 master-0 kubenswrapper[7146]: W0318 13:08:11.266291 7146 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266295 7146 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266300 7146 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266304 7146 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266309 7146 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266313 7146 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266318 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266331 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266336 7146 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266340 7146 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266343 7146 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266347 7146 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266350 7146 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266354 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266420 7146 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266428 7146 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266434 7146 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266439 7146 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266445 7146 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266450 7146 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:08:11.267781 master-0 kubenswrapper[7146]: W0318 13:08:11.266454 7146 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266459 7146 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266463 7146 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266467 7146 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266472 7146 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266477 7146 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266481 7146 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266485 7146 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266489 7146 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266494 7146 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266498 7146 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266503 7146 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266509 7146 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266514 7146 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266518 7146 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266521 7146 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266525 7146 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266528 7146 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266532 7146 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266535 7146 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:08:11.268306 master-0 kubenswrapper[7146]: W0318 13:08:11.266539 7146 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: W0318 13:08:11.266542 7146 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: W0318 13:08:11.266546 7146 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: W0318 13:08:11.266551 7146 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: W0318 13:08:11.266555 7146 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: W0318 13:08:11.266615 7146 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: I0318 13:08:11.266624 7146 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:08:11.268750 master-0 kubenswrapper[7146]: I0318 13:08:11.266881 7146 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 13:08:11.269201 master-0 kubenswrapper[7146]: I0318 13:08:11.269169 7146 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 13:08:11.269288 master-0 kubenswrapper[7146]: I0318 13:08:11.269265 7146 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 13:08:11.269528 master-0 kubenswrapper[7146]: I0318 13:08:11.269506 7146 server.go:997] "Starting client certificate rotation"
Mar 18 13:08:11.269528 master-0 kubenswrapper[7146]: I0318 13:08:11.269524 7146 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 13:08:11.270077 master-0 kubenswrapper[7146]: I0318 13:08:11.269985 7146 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 06:10:45.36291191 +0000 UTC
Mar 18 13:08:11.270077 master-0 kubenswrapper[7146]: I0318 13:08:11.270063 7146 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h2m34.09285219s for next certificate rotation
Mar 18 13:08:11.270407 master-0 kubenswrapper[7146]: I0318 13:08:11.270376 7146 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:08:11.271716 master-0 kubenswrapper[7146]: I0318 13:08:11.271688 7146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:08:11.274899 master-0 kubenswrapper[7146]: I0318 13:08:11.274870 7146 log.go:25] "Validated CRI v1 runtime API"
Mar 18 13:08:11.277091 master-0 kubenswrapper[7146]: I0318 13:08:11.277061 7146 log.go:25] "Validated CRI v1 image API"
Mar 18 13:08:11.278018 master-0 kubenswrapper[7146]: I0318 13:08:11.277974 7146 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 13:08:11.282183 master-0 kubenswrapper[7146]: I0318 13:08:11.282137 7146 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 81ff0aa5-030f-4028-8e1c-14208afe7bfb:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 18 13:08:11.282509 master-0 kubenswrapper[7146]: I0318 13:08:11.282167 7146 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d/userdata/shm major:0 
minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be/userdata/shm major:0 minor:115 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083/userdata/shm major:0 minor:318 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~projected/kube-api-access-xcm8d:{mountpoint:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~projected/kube-api-access-xcm8d major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/162e25c0-761c-4414-8c29-f6931afdb7b2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/162e25c0-761c-4414-8c29-f6931afdb7b2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~projected/kube-api-access-5gv8b:{mountpoint:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~projected/kube-api-access-5gv8b major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~projected/kube-api-access-cw64j:{mountpoint:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~projected/kube-api-access-cw64j major:0 
minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~projected/kube-api-access-4mlkj:{mountpoint:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~projected/kube-api-access-4mlkj major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/etcd-client major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~projected/kube-api-access-wnvfd:{mountpoint:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~projected/kube-api-access-wnvfd major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2cad2401-dab1-49f7-870e-a742ebfe323f/volumes/kubernetes.io~projected/kube-api-access-rv9m7:{mountpoint:/var/lib/kubelet/pods/2cad2401-dab1-49f7-870e-a742ebfe323f/volumes/kubernetes.io~projected/kube-api-access-rv9m7 major:0 minor:317 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~projected/kube-api-access-2sqzx:{mountpoint:/var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~projected/kube-api-access-2sqzx major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~projected/kube-api-access-dzblt:{mountpoint:/var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~projected/kube-api-access-dzblt major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~projected/kube-api-access-crbvx:{mountpoint:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~projected/kube-api-access-crbvx major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~projected/kube-api-access-bkdqs:{mountpoint:/var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~projected/kube-api-access-bkdqs major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971/volumes/kubernetes.io~projected/kube-api-access-qwfnk:{mountpoint:/var/lib/kubelet/pods/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971/volumes/kubernetes.io~projected/kube-api-access-qwfnk major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4086d06f-d50e-4632-9da7-508909429eef/volumes/kubernetes.io~projected/kube-api-access-w4lx2:{mountpoint:/var/lib/kubelet/pods/4086d06f-d50e-4632-9da7-508909429eef/volumes/kubernetes.io~projected/kube-api-access-w4lx2 major:0 minor:105 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/46ae7b31-c91c-477e-a04a-a3a8541747be/volumes/kubernetes.io~projected/kube-api-access-zwsns:{mountpoint:/var/lib/kubelet/pods/46ae7b31-c91c-477e-a04a-a3a8541747be/volumes/kubernetes.io~projected/kube-api-access-zwsns major:0 minor:114 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~projected/kube-api-access-ghzrb:{mountpoint:/var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~projected/kube-api-access-ghzrb major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~projected/kube-api-access-brvlj:{mountpoint:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~projected/kube-api-access-brvlj major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5bccf60c-5b07-4f40-8430-12bfb62661c7/volumes/kubernetes.io~projected/kube-api-access-4b6rn:{mountpoint:/var/lib/kubelet/pods/5bccf60c-5b07-4f40-8430-12bfb62661c7/volumes/kubernetes.io~projected/kube-api-access-4b6rn major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~projected/kube-api-access-lpl28:{mountpoint:/var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~projected/kube-api-access-lpl28 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/kube-api-access-h8v5n:{mountpoint:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/kube-api-access-h8v5n major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~projected/kube-api-access-9hb2q:{mountpoint:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~projected/kube-api-access-9hb2q major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~projected/kube-api-access-882b8:{mountpoint:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~projected/kube-api-access-882b8 major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~projected/kube-api-access-r8dfw:{mountpoint:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~projected/kube-api-access-r8dfw major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/906c0fd3-3bcd-4c6c-8505-b3517bae06b4/volumes/kubernetes.io~projected/kube-api-access-rgh46:{mountpoint:/var/lib/kubelet/pods/906c0fd3-3bcd-4c6c-8505-b3517bae06b4/volumes/kubernetes.io~projected/kube-api-access-rgh46 major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~projected/kube-api-access major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~projected/kube-api-access-qc69w:{mountpoint:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~projected/kube-api-access-qc69w major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~projected/kube-api-access-z9tzl:{mountpoint:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~projected/kube-api-access-z9tzl major:0 minor:238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~projected/kube-api-access-6f8xk:{mountpoint:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~projected/kube-api-access-6f8xk major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~projected/kube-api-access-lhqk9:{mountpoint:/var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~projected/kube-api-access-lhqk9 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~projected/kube-api-access-k254v:{mountpoint:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~projected/kube-api-access-k254v major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~projected/kube-api-access-qvxs4:{mountpoint:/var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~projected/kube-api-access-qvxs4 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/kube-api-access-j5mgr:{mountpoint:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/kube-api-access-j5mgr major:0 minor:235 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/3423d0369db68916018e7a90bfb647c23e66e99bc6963c3f17354dd44adb5421/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/1ca71bb35b93561fdd850d154d00f44091fffa2c78deea100104aec8292be872/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/da0fcf42ec550bd7c8605d80fcf425538b557005211207b6f5bc8b3a9d97ca37/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-117:{mountpoint:/var/lib/containers/storage/overlay/3013797d391afb903ba508a0744a36b2375ed50964372d0a6adcdfd2b502eebd/merged major:0 minor:117 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/ab0ddbf8738a8c996629dfa02e3045d5dbbd0f2fe7e3033d1c1888e4d91fd318/merged major:0 minor:121 fsType:overlay blockSize:0} 
overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/9f2cc4def61523aaf33fe56a40bec1b24565287c13681abcdab9538624dc0f62/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/362ebcc3008b14449b9214bbcb594366c6de16e857328c1ec03aed820bc0d3dd/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/99aad264c440f3c311b281fa876cfd6437f890d3e6a63ad074cfafc1a9d61aad/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/61b80e8331b24d3d1d12307d3c2dcbfdab09c1f78fd249105db1958f96a5bcf3/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/d1151958e9bcd57012f9df080cbcd4f154a92a698b67b450c6e6a661cc6b8165/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/9a03311962d9651f47e3fd4e1cb6188a5db805fc9509058013b0098bfb915779/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/e3e3168e70147df41769069bee96791517b3d1daf53c0bc28fb8666f3fe160aa/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/28823559ae57bfa39e7883d7e18573bbf1fb4c18e17a8a10ab955b5a27cf0ee4/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/d1f14c8505eece3b4ec54813450ac2f9b79e19293a183c5465c050b31e419e01/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/a73aef02c5a14e2474c4b613568ba63cf44d11d3fb49d97d48f19082dab856f0/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/62271c62bf15244d70720519ff8af6db78a4f8e599fc1b525dad671e754641c9/merged major:0 minor:174 fsType:overlay blockSize:0} 
overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/1e454559105307544df6bf62cb87e50e6fb836862d7106ba551da41383b3806f/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/e9ad479b6e63f9b8ee129130e94fe6b16984acdbd0f6e1a218ce10a7c4040be8/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/37415ef378c2d4a07f2c4c6a96a8024ae0462bfc693247debea0d79612b5e58d/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/ebb15355309b34ffc9b37d1e605fe08e55bda4fc54851b16984a0b2d6045aeca/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/dae2ce68a255a8a0b4321d9796f9e4c2c788da7163c0b315c03d96d19f0855c6/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/b50d9fe50e007e0d368e36c60f55b1d333d687568bc5358f5f72f5477dd703cb/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/bdc8bc7ecfa859111d21f5a7d2e21fa3955a2c5e9761c8d41f541c77980ceb6c/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-278:{mountpoint:/var/lib/containers/storage/overlay/589e3e55bc1cc76b94a111da89c4d5fe02807d858306aadd45a34d9b91caa168/merged major:0 minor:278 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/71b9d8dcd422093bd612fabdf8b8824cc0c13755a517d3d757847992a3582758/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/7d6dc61f5707e3c0cb7237c51e3ed5bb3bad665119afdf7897f82dd1b542b88a/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/959bc1ef2818b0e01b9b87596cd6f0c8ee90139a85c0a119f20c29364af4da62/merged major:0 minor:284 fsType:overlay blockSize:0} 
overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/44276f3c489655dcb7bb5779ac51f2b0dcabe74257413b49e99fce0e751fa841/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/b61203dfc553cb59f57c91bbf9a9caea8c8de0804ea7b92d935086c90d77cb12/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/c45218af49d7e41baec011cbb33667e98e84d370728601dd5b5f3f4cbee8edb7/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/aa4754b567bce24d0db62309e5a6ebdec6a331e02bf0f2dfa6a6675e09a8f50d/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/57963f10af35de1bd893eacca618cd775c06c7e37582d53acb7390c45be2a158/merged major:0 minor:294 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/3c19c0808088204bf1a9afc8a14658ce432f194ce408a36623995c3acda01096/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/6d96ad5f57032fb85bd12cd2dc3c682c9a8216e68c0668bcf577bdde0a866adc/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/3bf2cebae1f296da2250f2c959d1d735d2ccb96c5becd36064ee428321e36cdf/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/a936b5c249293ed73f7dc1fafbd3c35a6c5db38c70748d2d4f90769b855880d7/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/f1ee6c537b44b112415b4f08279ec5c0a52c03e06efc71fe06e65227cbd9be36/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/b966b97868f0365f29e8375b05b5d18650c0e495bd49a5ae64e2767c6de41bd4/merged major:0 minor:310 fsType:overlay blockSize:0} 
overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/bd694355e509c1a139cbb4de4360d88d395e5d1478ff6bd25531159faff533ae/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/d390a8ac535c384db5ec540d52eb6f1fc4a2ba1e57dfb55de2f4ef796523af85/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/bad4bdad8bd5ae2d3d59b335414f520b766524570df80163c2c35b77a76abc2a/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/4b820c4acfa9d77a1ca46da7dc23507a0f1091aa1f9f16c07caf49879251c3c0/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/46cd0efbce87aed0b1a3a3125b801b4b973347c0d7eceb1b72204da537eb1534/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/1c1c1d90f0ea83712dadc11981999484ef730c2b177ce7893f25c5698debfe2c/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/cad2a0e30de56b697ea6adf23a5ea1e3d93656624de651a711c1352aa545ea67/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/ca332885725ff5336ad71656d581a6d3fa68a4674719e297f4c0046db0b24774/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/462c4df42ebba75e937a947d588cd5358aed075826d164462efa3f204b4f9eaf/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/a1f25269a884449e43f3e2de1a0677e80031ab84a634f247e833490be1497ea1/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/3ef304214faae9fe7f545562be4481e99132f68bfb2ce574b9183c20c5abd7a5/merged major:0 minor:76 fsType:overlay blockSize:0} 
overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/b0179bec999c30de269eeba7b6df7aa9bdc494198feef3ef8728b4dd12b94fde/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/2b8b2a7e82cb62283a2af86c744485134cce26d12de8260bde043750d45bbc48/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/eafb80a1bbcbad64dbd236c9708a3445e2b8618f7712a68004c0b3f884c3ec42/merged major:0 minor:83 fsType:overlay blockSize:0}] Mar 18 13:08:11.303488 master-0 kubenswrapper[7146]: I0318 13:08:11.302651 7146 manager.go:217] Machine: {Timestamp:2026-03-18 13:08:11.301433658 +0000 UTC m=+0.109651039 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ba707060b4b44f7a95adbd0306be6534 SystemUUID:ba707060-b4b4-4f7a-95ad-bd0306be6534 BootID:d4169b54-c5ea-4f66-b18c-82f9506641bd Filesystems:[{Device:/run/containers/storage/overlay-containers/c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be/userdata/shm DeviceMajor:0 DeviceMinor:115 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~projected/kube-api-access-brvlj DeviceMajor:0 DeviceMinor:126 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~projected/kube-api-access-lpl28 DeviceMajor:0 DeviceMinor:123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:215 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb/userdata/shm DeviceMajor:0 
DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:234 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~projected/kube-api-access-bkdqs DeviceMajor:0 DeviceMinor:237 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~projected/kube-api-access-xcm8d DeviceMajor:0 DeviceMinor:226 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971/volumes/kubernetes.io~projected/kube-api-access-qwfnk DeviceMajor:0 DeviceMinor:271 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:229 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/kube-api-access-j5mgr DeviceMajor:0 DeviceMinor:235 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~projected/kube-api-access-lhqk9 DeviceMajor:0 DeviceMinor:245 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/906c0fd3-3bcd-4c6c-8505-b3517bae06b4/volumes/kubernetes.io~projected/kube-api-access-rgh46 DeviceMajor:0 DeviceMinor:241 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/46ae7b31-c91c-477e-a04a-a3a8541747be/volumes/kubernetes.io~projected/kube-api-access-zwsns DeviceMajor:0 DeviceMinor:114 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/kube-api-access-h8v5n DeviceMajor:0 DeviceMinor:236 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~projected/kube-api-access-cw64j DeviceMajor:0 DeviceMinor:233 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~projected/kube-api-access-2sqzx DeviceMajor:0 DeviceMinor:244 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:246 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~projected/kube-api-access-882b8 DeviceMajor:0 DeviceMinor:94 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/162e25c0-761c-4414-8c29-f6931afdb7b2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:98 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5bccf60c-5b07-4f40-8430-12bfb62661c7/volumes/kubernetes.io~projected/kube-api-access-4b6rn DeviceMajor:0 DeviceMinor:243 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~projected/kube-api-access-qc69w DeviceMajor:0 DeviceMinor:248 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~projected/kube-api-access-k254v DeviceMajor:0 DeviceMinor:138 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~projected/kube-api-access-5gv8b DeviceMajor:0 DeviceMinor:240 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~projected/kube-api-access-dzblt DeviceMajor:0 DeviceMinor:247 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~projected/kube-api-access-r8dfw DeviceMajor:0 DeviceMinor:225 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~projected/kube-api-access-qvxs4 DeviceMajor:0 DeviceMinor:239 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083/userdata/shm DeviceMajor:0 DeviceMinor:318 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~projected/kube-api-access-wnvfd DeviceMajor:0 DeviceMinor:164 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~projected/kube-api-access-6f8xk DeviceMajor:0 DeviceMinor:242 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-117 DeviceMajor:0 DeviceMinor:117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~projected/kube-api-access-9hb2q DeviceMajor:0 DeviceMinor:224 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2cad2401-dab1-49f7-870e-a742ebfe323f/volumes/kubernetes.io~projected/kube-api-access-rv9m7 DeviceMajor:0 DeviceMinor:317 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~projected/kube-api-access-crbvx DeviceMajor:0 DeviceMinor:227 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~projected/kube-api-access-ghzrb DeviceMajor:0 DeviceMinor:230 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4086d06f-d50e-4632-9da7-508909429eef/volumes/kubernetes.io~projected/kube-api-access-w4lx2 DeviceMajor:0 DeviceMinor:105 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-278 DeviceMajor:0 DeviceMinor:278 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~projected/kube-api-access-4mlkj DeviceMajor:0 DeviceMinor:231 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~projected/kube-api-access-z9tzl DeviceMajor:0 DeviceMinor:238 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:125 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0e106377d9d72c2 MacAddress:ca:f1:6d:36:7c:a5 Speed:10000 Mtu:8900} {Name:13d61ed6ba86dc9 MacAddress:2e:a4:67:ba:ff:d9 Speed:10000 Mtu:8900} {Name:2bf4b712cae2c0e MacAddress:c6:59:ce:61:ee:96 Speed:10000 Mtu:8900} {Name:513bdda53b682c9 MacAddress:d6:20:3c:4d:f7:d8 Speed:10000 Mtu:8900} {Name:8207c4419d89bbe MacAddress:12:1d:f5:93:40:72 Speed:10000 Mtu:8900} {Name:8385307c04cfef1 MacAddress:2e:b9:68:9e:76:8d Speed:10000 Mtu:8900} {Name:8c177b73cce0c7f MacAddress:52:fa:58:e4:0b:f0 Speed:10000 Mtu:8900} {Name:95165b81eb17d7c MacAddress:66:49:9d:1f:e7:c1 Speed:10000 Mtu:8900} {Name:b71043687eba731 MacAddress:b6:f6:30:90:42:c4 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:72:5b:82:b7:84:c5 Speed:0 Mtu:8900} {Name:c0d9adef366d9f4 MacAddress:4a:e6:76:05:3f:d9 Speed:10000 Mtu:8900} {Name:cad2dea033992ed MacAddress:aa:87:c9:c3:c6:3c Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 
MacAddress:fa:16:3e:25:c2:a7 Speed:-1 Mtu:9000} {Name:f3d8252ff99e6f3 MacAddress:1a:9f:85:20:ad:69 Speed:10000 Mtu:8900} {Name:fac381b9cc8f57c MacAddress:0a:07:a5:f4:05:00 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:12:bd:01:20:1c:b1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 13:08:11.303488 master-0 kubenswrapper[7146]: I0318 13:08:11.303464 7146 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 13:08:11.303827 master-0 kubenswrapper[7146]: I0318 13:08:11.303589 7146 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 13:08:11.303896 master-0 kubenswrapper[7146]: I0318 13:08:11.303875 7146 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 13:08:11.304088 master-0 kubenswrapper[7146]: I0318 13:08:11.304047 7146 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 13:08:11.304289 master-0 kubenswrapper[7146]: I0318 13:08:11.304082 7146 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percen
tage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 13:08:11.304343 master-0 kubenswrapper[7146]: I0318 13:08:11.304312 7146 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 13:08:11.304343 master-0 kubenswrapper[7146]: I0318 13:08:11.304326 7146 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 13:08:11.304343 master-0 kubenswrapper[7146]: I0318 13:08:11.304336 7146 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 13:08:11.304428 master-0 kubenswrapper[7146]: I0318 13:08:11.304359 7146 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 13:08:11.304480 master-0 kubenswrapper[7146]: I0318 13:08:11.304468 7146 state_mem.go:36] "Initialized new in-memory state store" Mar 18 13:08:11.305113 master-0 kubenswrapper[7146]: I0318 13:08:11.305087 7146 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 13:08:11.305173 master-0 kubenswrapper[7146]: I0318 13:08:11.305152 7146 kubelet.go:418] "Attempting to sync node with API server" Mar 18 13:08:11.305173 master-0 kubenswrapper[7146]: I0318 13:08:11.305166 7146 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 13:08:11.305257 master-0 kubenswrapper[7146]: I0318 13:08:11.305180 7146 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 13:08:11.305257 master-0 kubenswrapper[7146]: I0318 13:08:11.305193 7146 kubelet.go:324] "Adding apiserver pod source" Mar 18 13:08:11.305257 master-0 
kubenswrapper[7146]: I0318 13:08:11.305210 7146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 13:08:11.306180 master-0 kubenswrapper[7146]: I0318 13:08:11.306120 7146 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 13:08:11.306302 master-0 kubenswrapper[7146]: I0318 13:08:11.306288 7146 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 18 13:08:11.306586 master-0 kubenswrapper[7146]: I0318 13:08:11.306566 7146 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 13:08:11.306704 master-0 kubenswrapper[7146]: I0318 13:08:11.306682 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306707 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306715 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306723 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306730 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306737 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306745 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306752 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 13:08:11.306786 master-0 
kubenswrapper[7146]: I0318 13:08:11.306762 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306770 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 13:08:11.306786 master-0 kubenswrapper[7146]: I0318 13:08:11.306783 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 13:08:11.307349 master-0 kubenswrapper[7146]: I0318 13:08:11.306797 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 13:08:11.307349 master-0 kubenswrapper[7146]: I0318 13:08:11.306839 7146 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 13:08:11.307349 master-0 kubenswrapper[7146]: I0318 13:08:11.307168 7146 server.go:1280] "Started kubelet" Mar 18 13:08:11.307755 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 18 13:08:11.308098 master-0 kubenswrapper[7146]: I0318 13:08:11.307826 7146 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 13:08:11.308098 master-0 kubenswrapper[7146]: I0318 13:08:11.307885 7146 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 13:08:11.308324 master-0 kubenswrapper[7146]: I0318 13:08:11.308304 7146 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 13:08:11.308404 master-0 kubenswrapper[7146]: I0318 13:08:11.308363 7146 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 13:08:11.318684 master-0 kubenswrapper[7146]: I0318 13:08:11.318634 7146 server.go:449] "Adding debug handlers to kubelet server" Mar 18 13:08:11.319445 master-0 kubenswrapper[7146]: I0318 13:08:11.319429 7146 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 13:08:11.319646 master-0 kubenswrapper[7146]: I0318 13:08:11.319613 7146 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 09:43:40.451078657 +0000 UTC Mar 18 13:08:11.319709 master-0 kubenswrapper[7146]: I0318 13:08:11.319699 7146 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h35m29.131384638s for next certificate rotation Mar 18 13:08:11.319782 master-0 kubenswrapper[7146]: I0318 13:08:11.319768 7146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 13:08:11.320531 master-0 kubenswrapper[7146]: I0318 13:08:11.320488 7146 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 13:08:11.320531 master-0 kubenswrapper[7146]: I0318 13:08:11.320526 7146 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 13:08:11.320610 master-0 kubenswrapper[7146]: E0318 13:08:11.320531 7146 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:08:11.321042 master-0 kubenswrapper[7146]: I0318 13:08:11.321017 7146 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 13:08:11.322984 master-0 kubenswrapper[7146]: I0318 13:08:11.321795 7146 factory.go:55] Registering systemd factory Mar 18 13:08:11.322984 master-0 kubenswrapper[7146]: I0318 13:08:11.321821 7146 factory.go:221] Registration of the systemd container factory successfully Mar 18 13:08:11.330255 master-0 kubenswrapper[7146]: I0318 13:08:11.330226 7146 factory.go:153] Registering CRI-O factory Mar 18 13:08:11.330255 master-0 kubenswrapper[7146]: I0318 13:08:11.330253 7146 factory.go:221] Registration of the crio container factory successfully Mar 18 13:08:11.330434 master-0 kubenswrapper[7146]: I0318 13:08:11.330354 7146 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no 
such file or directory Mar 18 13:08:11.330434 master-0 kubenswrapper[7146]: I0318 13:08:11.330377 7146 factory.go:103] Registering Raw factory Mar 18 13:08:11.330434 master-0 kubenswrapper[7146]: I0318 13:08:11.330391 7146 manager.go:1196] Started watching for new ooms in manager Mar 18 13:08:11.331287 master-0 kubenswrapper[7146]: I0318 13:08:11.331268 7146 manager.go:319] Starting recovery of all containers Mar 18 13:08:11.334399 master-0 kubenswrapper[7146]: I0318 13:08:11.334346 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb471665-2b07-48df-9881-3fb663390b23" volumeName="kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config" seLinuxMountContext="" Mar 18 13:08:11.334468 master-0 kubenswrapper[7146]: I0318 13:08:11.334403 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35925474-e3fe-4cff-aad6-d853816618c7" volumeName="kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt" seLinuxMountContext="" Mar 18 13:08:11.334468 master-0 kubenswrapper[7146]: I0318 13:08:11.334421 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4086d06f-d50e-4632-9da7-508909429eef" volumeName="kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2" seLinuxMountContext="" Mar 18 13:08:11.334468 master-0 kubenswrapper[7146]: I0318 13:08:11.334433 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n" seLinuxMountContext="" Mar 18 13:08:11.334468 master-0 kubenswrapper[7146]: I0318 13:08:11.334464 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" 
volumeName="kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle" seLinuxMountContext="" Mar 18 13:08:11.334563 master-0 kubenswrapper[7146]: I0318 13:08:11.334477 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93ea3c78-dede-468f-89a5-551133f794c5" volumeName="kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access" seLinuxMountContext="" Mar 18 13:08:11.334563 master-0 kubenswrapper[7146]: I0318 13:08:11.334509 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config" seLinuxMountContext="" Mar 18 13:08:11.334563 master-0 kubenswrapper[7146]: I0318 13:08:11.334538 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2c4572e-0b38-4db1-96e5-6a35e29048e7" volumeName="kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.334563 master-0 kubenswrapper[7146]: I0318 13:08:11.334554 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb471665-2b07-48df-9881-3fb663390b23" volumeName="kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334566 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da6a763d-2777-40c4-ae1f-c77ced406ea2" volumeName="kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334578 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0213214b-693b-411b-8254-48d7826011eb" 
volumeName="kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334589 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="162e25c0-761c-4414-8c29-f6931afdb7b2" volumeName="kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334598 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334609 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="369e9689-e2f6-4276-b096-8db094f8d6ae" volumeName="kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334619 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334629 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0213214b-693b-411b-8254-48d7826011eb" volumeName="kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334643 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="83a4f641-d28f-42aa-a228-f6086d720fe4" volumeName="kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334652 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" volumeName="kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334661 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.334668 master-0 kubenswrapper[7146]: I0318 13:08:11.334671 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides" seLinuxMountContext="" Mar 18 13:08:11.334992 master-0 kubenswrapper[7146]: I0318 13:08:11.334702 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c24b6e2-965b-4b4f-ad65-ded7b3cc3971" volumeName="kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script" seLinuxMountContext="" Mar 18 13:08:11.334992 master-0 kubenswrapper[7146]: I0318 13:08:11.334715 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4086d06f-d50e-4632-9da7-508909429eef" volumeName="kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config" seLinuxMountContext="" Mar 18 13:08:11.334992 master-0 kubenswrapper[7146]: I0318 13:08:11.334724 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy" seLinuxMountContext="" Mar 18 13:08:11.335566 master-0 kubenswrapper[7146]: I0318 13:08:11.335482 7146 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 13:08:11.335735 master-0 kubenswrapper[7146]: I0318 13:08:11.335702 7146 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 13:08:11.336494 master-0 kubenswrapper[7146]: I0318 13:08:11.336452 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config" seLinuxMountContext="" Mar 18 13:08:11.336568 master-0 kubenswrapper[7146]: I0318 13:08:11.336518 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w" seLinuxMountContext="" Mar 18 13:08:11.336568 master-0 kubenswrapper[7146]: I0318 13:08:11.336539 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" volumeName="kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr" seLinuxMountContext="" Mar 18 13:08:11.336568 master-0 kubenswrapper[7146]: I0318 13:08:11.336560 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad580a2-7f58-4d66-adad-0a53d9777655" volumeName="kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config" seLinuxMountContext="" Mar 18 13:08:11.336683 master-0 kubenswrapper[7146]: I0318 13:08:11.336579 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib" seLinuxMountContext="" Mar 18 13:08:11.336726 master-0 kubenswrapper[7146]: I0318 13:08:11.336706 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token" seLinuxMountContext="" Mar 18 13:08:11.336765 master-0 kubenswrapper[7146]: I0318 13:08:11.336727 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.336855 master-0 kubenswrapper[7146]: I0318 13:08:11.336771 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2c4572e-0b38-4db1-96e5-6a35e29048e7" volumeName="kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config" seLinuxMountContext="" Mar 18 13:08:11.336969 master-0 kubenswrapper[7146]: I0318 13:08:11.336858 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" volumeName="kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca" seLinuxMountContext="" Mar 18 13:08:11.337030 master-0 kubenswrapper[7146]: I0318 13:08:11.336968 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config" seLinuxMountContext="" Mar 18 13:08:11.337030 master-0 kubenswrapper[7146]: I0318 13:08:11.336987 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 13:08:11.337030 master-0 kubenswrapper[7146]: I0318 13:08:11.336998 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" volumeName="kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46" seLinuxMountContext="" Mar 18 13:08:11.337136 master-0 kubenswrapper[7146]: I0318 13:08:11.337013 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" volumeName="kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config" seLinuxMountContext="" Mar 18 13:08:11.337136 master-0 kubenswrapper[7146]: I0318 13:08:11.337098 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" volumeName="kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access" seLinuxMountContext="" Mar 18 13:08:11.337136 master-0 kubenswrapper[7146]: I0318 13:08:11.337112 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee1eb80b-5a76-443f-a534-54d5bdc0c98a" volumeName="kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config" seLinuxMountContext="" Mar 18 13:08:11.337248 master-0 kubenswrapper[7146]: I0318 13:08:11.337124 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a930da-d793-486f-bcef-cf042d3c427d" volumeName="kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.337322 master-0 kubenswrapper[7146]: I0318 13:08:11.337295 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client" seLinuxMountContext="" Mar 18 13:08:11.337372 master-0 kubenswrapper[7146]: I0318 13:08:11.337326 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93ea3c78-dede-468f-89a5-551133f794c5" volumeName="kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config" seLinuxMountContext="" Mar 18 13:08:11.337440 master-0 kubenswrapper[7146]: I0318 13:08:11.337415 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v" seLinuxMountContext="" Mar 18 13:08:11.337491 master-0 kubenswrapper[7146]: I0318 13:08:11.337445 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee1eb80b-5a76-443f-a534-54d5bdc0c98a" volumeName="kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4" seLinuxMountContext="" Mar 18 13:08:11.337491 master-0 kubenswrapper[7146]: I0318 13:08:11.337459 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca" seLinuxMountContext="" Mar 18 13:08:11.337566 master-0 kubenswrapper[7146]: I0318 13:08:11.337493 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2cad2401-dab1-49f7-870e-a742ebfe323f" volumeName="kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7" seLinuxMountContext="" Mar 18 13:08:11.337566 master-0 kubenswrapper[7146]: I0318 13:08:11.337509 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3c24b6e2-965b-4b4f-ad65-ded7b3cc3971" volumeName="kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk" seLinuxMountContext="" Mar 18 13:08:11.337566 master-0 kubenswrapper[7146]: I0318 13:08:11.337521 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj" seLinuxMountContext="" Mar 18 13:08:11.337566 master-0 kubenswrapper[7146]: I0318 13:08:11.337555 7146 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:08:11.337703 master-0 kubenswrapper[7146]: I0318 13:08:11.337556 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bccf60c-5b07-4f40-8430-12bfb62661c7" volumeName="kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn" seLinuxMountContext="" Mar 18 13:08:11.341517 master-0 kubenswrapper[7146]: I0318 13:08:11.341408 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" volumeName="kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.341636 master-0 kubenswrapper[7146]: I0318 13:08:11.341510 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2c4572e-0b38-4db1-96e5-6a35e29048e7" volumeName="kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access" seLinuxMountContext="" Mar 18 13:08:11.341636 master-0 kubenswrapper[7146]: I0318 13:08:11.341560 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0213214b-693b-411b-8254-48d7826011eb" volumeName="kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert" seLinuxMountContext="" 
Mar 18 13:08:11.341636 master-0 kubenswrapper[7146]: I0318 13:08:11.341576 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca" seLinuxMountContext="" Mar 18 13:08:11.341636 master-0 kubenswrapper[7146]: I0318 13:08:11.341629 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="330df925-8429-4b96-9bfe-caa017c21afa" volumeName="kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341654 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47f82c03-65d1-4a6c-ba09-8a00ae778009" volumeName="kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341672 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341695 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a0944d2-d99a-42eb-81f5-a212b750b8f4" volumeName="kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341713 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a0944d2-d99a-42eb-81f5-a212b750b8f4" volumeName="kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls" 
seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341765 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36db10b8-33a2-4b54-85e2-9809eb6bc37d" volumeName="kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341779 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341798 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e691486-8540-4b79-8eed-b0fb829071db" volumeName="kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341811 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca" seLinuxMountContext="" Mar 18 13:08:11.341830 master-0 kubenswrapper[7146]: I0318 13:08:11.341828 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" volumeName="kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.341843 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a930da-d793-486f-bcef-cf042d3c427d" volumeName="kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets" 
seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.341856 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.341873 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.341929 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" volumeName="kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.341961 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.341982 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342014 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="330df925-8429-4b96-9bfe-caa017c21afa" volumeName="kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca" 
seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342034 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4086d06f-d50e-4632-9da7-508909429eef" volumeName="kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342049 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb471665-2b07-48df-9881-3fb663390b23" volumeName="kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342061 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342079 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad580a2-7f58-4d66-adad-0a53d9777655" volumeName="kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342113 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342133 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" 
volumeName="kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw" seLinuxMountContext="" Mar 18 13:08:11.342239 master-0 kubenswrapper[7146]: I0318 13:08:11.342182 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="369e9689-e2f6-4276-b096-8db094f8d6ae" volumeName="kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342196 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342363 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342378 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a930da-d793-486f-bcef-cf042d3c427d" volumeName="kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342398 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342411 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83a4f641-d28f-42aa-a228-f6086d720fe4" 
volumeName="kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342498 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="162e25c0-761c-4414-8c29-f6931afdb7b2" volumeName="kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342546 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad580a2-7f58-4d66-adad-0a53d9777655" volumeName="kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342561 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342612 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83a4f641-d28f-42aa-a228-f6086d720fe4" volumeName="kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342627 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93ea3c78-dede-468f-89a5-551133f794c5" volumeName="kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342640 7146 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" 
volumeName="kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token" seLinuxMountContext="" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342652 7146 reconstruct.go:97] "Volume reconstruction finished" Mar 18 13:08:11.342687 master-0 kubenswrapper[7146]: I0318 13:08:11.342665 7146 reconciler.go:26] "Reconciler: start to sync state" Mar 18 13:08:11.355472 master-0 kubenswrapper[7146]: I0318 13:08:11.355246 7146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 13:08:11.356798 master-0 kubenswrapper[7146]: I0318 13:08:11.356774 7146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 13:08:11.356881 master-0 kubenswrapper[7146]: I0318 13:08:11.356818 7146 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 13:08:11.356881 master-0 kubenswrapper[7146]: I0318 13:08:11.356843 7146 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 13:08:11.357015 master-0 kubenswrapper[7146]: E0318 13:08:11.356984 7146 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 13:08:11.358594 master-0 kubenswrapper[7146]: I0318 13:08:11.358569 7146 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:08:11.365740 master-0 kubenswrapper[7146]: I0318 13:08:11.365559 7146 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="dceb07db18c0d8faeb0249820c09e2ecee50c97d0f9fd01d9a209e9a350fd96e" exitCode=0 Mar 18 13:08:11.385775 master-0 kubenswrapper[7146]: I0318 13:08:11.385739 7146 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 13:08:11.403093 master-0 kubenswrapper[7146]: I0318 13:08:11.403055 7146 generic.go:334] "Generic (PLEG): container finished" podID="20dc979a-732b-43b5-acc2-118e4c350470" 
containerID="25dc4f55701fc072574e9fbf9afecda3f3ce7724cd8af5190b0641c9037070fb" exitCode=0 Mar 18 13:08:11.414166 master-0 kubenswrapper[7146]: I0318 13:08:11.414130 7146 generic.go:334] "Generic (PLEG): container finished" podID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerID="e5d871ce15c246b83610b31f823caa6e0c2380ca2682febc8546add0e167eb72" exitCode=0 Mar 18 13:08:11.415675 master-0 kubenswrapper[7146]: I0318 13:08:11.415640 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 13:08:11.416019 master-0 kubenswrapper[7146]: I0318 13:08:11.415991 7146 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875" exitCode=1 Mar 18 13:08:11.416019 master-0 kubenswrapper[7146]: I0318 13:08:11.416017 7146 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="869e4216741d6f450122345795f65e862d784b38e4a915e11371713c52cf93a3" exitCode=0 Mar 18 13:08:11.423294 master-0 kubenswrapper[7146]: I0318 13:08:11.423265 7146 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="ab5b83d779ab6537d0a99adbe63763b23469f75fb94b22198d32842d6404c007" exitCode=0 Mar 18 13:08:11.423294 master-0 kubenswrapper[7146]: I0318 13:08:11.423289 7146 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="939081bad25da33d133eff9bd4c3f679efe60bd386467b9c7ea166c2edea2ccd" exitCode=0 Mar 18 13:08:11.423294 master-0 kubenswrapper[7146]: I0318 13:08:11.423298 7146 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="5308c4990ee617dab17b794620acded12b71b96d5a2e7a368924488be2073775" exitCode=0 Mar 18 13:08:11.423482 master-0 
kubenswrapper[7146]: I0318 13:08:11.423307 7146 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="158c0af92fac11481577106174b03b386a7b412c2e448451da762deb74b713bd" exitCode=0 Mar 18 13:08:11.423482 master-0 kubenswrapper[7146]: I0318 13:08:11.423316 7146 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="9b8b0976c817ccd695886d1ba83ffcc31d11cd506356512ccbdf4d71a9024f68" exitCode=0 Mar 18 13:08:11.423482 master-0 kubenswrapper[7146]: I0318 13:08:11.423322 7146 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="3ae100b68292305eb4454b58c0f9a6577d27f65eaa549bd19854723db5585aee" exitCode=0 Mar 18 13:08:11.428411 master-0 kubenswrapper[7146]: I0318 13:08:11.428383 7146 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb" exitCode=1 Mar 18 13:08:11.431512 master-0 kubenswrapper[7146]: I0318 13:08:11.431467 7146 generic.go:334] "Generic (PLEG): container finished" podID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerID="6c9c61fe13233fc2963a22bc53cbe738d781d6a4794b40b0e2484f290dbd30f4" exitCode=0 Mar 18 13:08:11.457192 master-0 kubenswrapper[7146]: E0318 13:08:11.457143 7146 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 13:08:11.470256 master-0 kubenswrapper[7146]: I0318 13:08:11.470222 7146 manager.go:324] Recovery completed Mar 18 13:08:11.504461 master-0 kubenswrapper[7146]: I0318 13:08:11.504431 7146 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 13:08:11.504716 master-0 kubenswrapper[7146]: I0318 13:08:11.504701 7146 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 13:08:11.504861 master-0 kubenswrapper[7146]: I0318 13:08:11.504850 7146 state_mem.go:36] "Initialized new 
in-memory state store" Mar 18 13:08:11.505305 master-0 kubenswrapper[7146]: I0318 13:08:11.505289 7146 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 13:08:11.505450 master-0 kubenswrapper[7146]: I0318 13:08:11.505422 7146 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 13:08:11.505548 master-0 kubenswrapper[7146]: I0318 13:08:11.505537 7146 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 18 13:08:11.505635 master-0 kubenswrapper[7146]: I0318 13:08:11.505625 7146 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 18 13:08:11.505722 master-0 kubenswrapper[7146]: I0318 13:08:11.505713 7146 policy_none.go:49] "None policy: Start" Mar 18 13:08:11.507218 master-0 kubenswrapper[7146]: I0318 13:08:11.507192 7146 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 13:08:11.507289 master-0 kubenswrapper[7146]: I0318 13:08:11.507222 7146 state_mem.go:35] "Initializing new in-memory state store" Mar 18 13:08:11.507442 master-0 kubenswrapper[7146]: I0318 13:08:11.507423 7146 state_mem.go:75] "Updated machine memory state" Mar 18 13:08:11.507442 master-0 kubenswrapper[7146]: I0318 13:08:11.507438 7146 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 18 13:08:11.516527 master-0 kubenswrapper[7146]: I0318 13:08:11.516500 7146 manager.go:334] "Starting Device Plugin manager" Mar 18 13:08:11.516742 master-0 kubenswrapper[7146]: I0318 13:08:11.516543 7146 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 13:08:11.516742 master-0 kubenswrapper[7146]: I0318 13:08:11.516559 7146 server.go:79] "Starting device plugin registration server" Mar 18 13:08:11.517049 master-0 kubenswrapper[7146]: I0318 13:08:11.517019 7146 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 13:08:11.517131 master-0 kubenswrapper[7146]: I0318 13:08:11.517039 7146 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 13:08:11.517475 master-0 kubenswrapper[7146]: I0318 13:08:11.517452 7146 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 13:08:11.517581 master-0 kubenswrapper[7146]: I0318 13:08:11.517543 7146 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 13:08:11.517581 master-0 kubenswrapper[7146]: I0318 13:08:11.517560 7146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 13:08:11.631852 master-0 kubenswrapper[7146]: I0318 13:08:11.619111 7146 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:08:11.631852 master-0 kubenswrapper[7146]: I0318 13:08:11.621097 7146 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:08:11.631852 master-0 kubenswrapper[7146]: I0318 13:08:11.621132 7146 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:08:11.631852 master-0 kubenswrapper[7146]: I0318 13:08:11.621143 7146 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:08:11.631852 master-0 kubenswrapper[7146]: I0318 13:08:11.621193 7146 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:08:11.657370 master-0 kubenswrapper[7146]: I0318 13:08:11.657246 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:08:11.657706 master-0 kubenswrapper[7146]: I0318 13:08:11.657600 7146 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"7ddc54cddedd2bdae32224357d62187da26cebbd3a01e7a295c7e87fef85c020"} Mar 18 13:08:11.657766 master-0 kubenswrapper[7146]: I0318 13:08:11.657713 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"dceb07db18c0d8faeb0249820c09e2ecee50c97d0f9fd01d9a209e9a350fd96e"} Mar 18 13:08:11.657766 master-0 kubenswrapper[7146]: I0318 13:08:11.657730 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014"} Mar 18 13:08:11.657838 master-0 kubenswrapper[7146]: I0318 13:08:11.657780 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"40d2f52b6191fb64bc515d1f7e32cd3a0019730cc68c0ff9674d239a2fee21db"} Mar 18 13:08:11.657838 master-0 kubenswrapper[7146]: I0318 13:08:11.657792 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812"} Mar 18 13:08:11.657838 master-0 kubenswrapper[7146]: I0318 13:08:11.657812 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564609a7495d28c1d563a48d2887cf8c20defd72a9874fce74c4c59b19ac7bdf" Mar 18 13:08:11.657838 master-0 kubenswrapper[7146]: I0318 13:08:11.657824 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"4bd355c34f8aa8d889ca1a40b947fb34311faee6233b1e449a1cc61917522f5b"} Mar 18 13:08:11.657838 master-0 kubenswrapper[7146]: I0318 13:08:11.657835 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657849 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"869e4216741d6f450122345795f65e862d784b38e4a915e11371713c52cf93a3"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657861 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657896 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"25e53d87fc10cbd1352f788562bb532f3ed8f0ccfa5cd8ec598184e45bd58b6c"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657909 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657919 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657930 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f8ae6d060a44d48f0a3c581d701c99ae6804b630252206cc7208922bed8db289"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657955 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657966 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657976 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"70245a400781d2a78e9a27a22733df63043f95251d557ab2f0c87663ff3421fb"} Mar 18 13:08:11.657989 master-0 kubenswrapper[7146]: I0318 13:08:11.657993 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce97760530466dc4fab04d92ea3320ac86069f6a538466695591a4fec01d17ee" Mar 18 13:08:11.787884 master-0 kubenswrapper[7146]: I0318 13:08:11.787817 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.788129 master-0 kubenswrapper[7146]: I0318 13:08:11.788049 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.788129 master-0 kubenswrapper[7146]: I0318 13:08:11.788075 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:11.788129 master-0 kubenswrapper[7146]: I0318 13:08:11.788100 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:11.788129 master-0 kubenswrapper[7146]: I0318 13:08:11.788124 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.788305 master-0 kubenswrapper[7146]: I0318 13:08:11.788144 7146 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.788305 master-0 kubenswrapper[7146]: I0318 13:08:11.788170 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.788305 master-0 kubenswrapper[7146]: I0318 13:08:11.788257 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.788305 master-0 kubenswrapper[7146]: I0318 13:08:11.788278 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.788408 master-0 kubenswrapper[7146]: I0318 13:08:11.788311 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: 
\"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.788408 master-0 kubenswrapper[7146]: I0318 13:08:11.788329 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.788408 master-0 kubenswrapper[7146]: I0318 13:08:11.788346 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.788408 master-0 kubenswrapper[7146]: I0318 13:08:11.788370 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:11.788408 master-0 kubenswrapper[7146]: I0318 13:08:11.788387 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:11.788408 master-0 kubenswrapper[7146]: I0318 13:08:11.788405 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:11.788557 master-0 kubenswrapper[7146]: I0318 13:08:11.788423 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:11.788557 master-0 kubenswrapper[7146]: I0318 13:08:11.788480 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.889684 master-0 kubenswrapper[7146]: I0318 13:08:11.889335 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.889684 master-0 kubenswrapper[7146]: I0318 13:08:11.889602 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.889684 master-0 kubenswrapper[7146]: I0318 13:08:11.889633 7146 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889694 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889696 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889727 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889789 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889816 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889833 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889846 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:11.889988 master-0 kubenswrapper[7146]: I0318 13:08:11.889859 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890198 master-0 kubenswrapper[7146]: I0318 13:08:11.889874 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:11.890198 master-0 kubenswrapper[7146]: I0318 13:08:11.890125 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:11.890198 master-0 kubenswrapper[7146]: I0318 13:08:11.890151 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890198 master-0 kubenswrapper[7146]: I0318 13:08:11.890170 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.890199 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.890047 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 
13:08:11.890063 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.890029 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.889728 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.889895 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.890006 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:11.890307 master-0 kubenswrapper[7146]: I0318 13:08:11.890301 
7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890339 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890386 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890390 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890356 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890360 7146 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890457 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890424 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890461 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890481 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890546 master-0 kubenswrapper[7146]: I0318 13:08:11.890523 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:11.890835 master-0 kubenswrapper[7146]: I0318 13:08:11.890560 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:12.029157 master-0 kubenswrapper[7146]: I0318 13:08:12.029097 7146 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 18 13:08:12.029338 master-0 kubenswrapper[7146]: I0318 13:08:12.029223 7146 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 18 13:08:12.029408 master-0 kubenswrapper[7146]: E0318 13:08:12.029356 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:08:12.029476 master-0 kubenswrapper[7146]: E0318 13:08:12.029430 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:12.305685 master-0 kubenswrapper[7146]: I0318 13:08:12.305623 7146 apiserver.go:52] "Watching apiserver" Mar 18 13:08:12.351362 master-0 kubenswrapper[7146]: E0318 13:08:12.351298 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:08:12.351767 master-0 kubenswrapper[7146]: W0318 13:08:12.351718 
7146 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 13:08:12.351844 master-0 kubenswrapper[7146]: E0318 13:08:12.351811 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:12.352189 master-0 kubenswrapper[7146]: E0318 13:08:12.352162 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:08:12.352521 master-0 kubenswrapper[7146]: I0318 13:08:12.352496 7146 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:08:12.355040 master-0 kubenswrapper[7146]: I0318 13:08:12.354162 7146 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl","openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c","openshift-dns-operator/dns-operator-9c5679d8f-bqbzx","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8","openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-operator/iptables-alerter-tvnss","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz","openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm","openshift-etcd/etcd-master-0-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf","openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5","openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk","openshift-ovn-kubernetes/ovnkube-node-pfs29","openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5","assisted-installer/assisted-installer-controller-m2vzq","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg","openshift-network-operator/network-operator-7bd846bfc4-mk4d5","openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb","openshift-marketplace/marketplace-operator-89ccd998f-4v84b","openshift-mul
tus/multus-additional-cni-plugins-xpppb","openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb","openshift-multus/network-metrics-daemon-kq2j4","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56","openshift-network-diagnostics/network-check-target-zlgkc","openshift-network-node-identity/network-node-identity-xcbtb","openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l","openshift-multus/multus-9bhww","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"] Mar 18 13:08:12.356040 master-0 kubenswrapper[7146]: I0318 13:08:12.356006 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.356109 master-0 kubenswrapper[7146]: I0318 13:08:12.356038 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:12.356382 master-0 kubenswrapper[7146]: I0318 13:08:12.356335 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:12.356382 master-0 kubenswrapper[7146]: I0318 13:08:12.356372 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.356779 master-0 kubenswrapper[7146]: I0318 13:08:12.356650 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:12.356779 master-0 kubenswrapper[7146]: I0318 13:08:12.356681 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:08:12.356779 master-0 kubenswrapper[7146]: I0318 13:08:12.356728 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-m2vzq"
Mar 18 13:08:12.356779 master-0 kubenswrapper[7146]: I0318 13:08:12.356753 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:12.356779 master-0 kubenswrapper[7146]: I0318 13:08:12.356758 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:08:12.357178 master-0 kubenswrapper[7146]: I0318 13:08:12.357010 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:08:12.362386 master-0 kubenswrapper[7146]: I0318 13:08:12.362340 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:12.362680 master-0 kubenswrapper[7146]: I0318 13:08:12.362649 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:12.363507 master-0 kubenswrapper[7146]: I0318 13:08:12.363464 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:08:12.363611 master-0 kubenswrapper[7146]: I0318 13:08:12.363584 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 13:08:12.363658 master-0 kubenswrapper[7146]: I0318 13:08:12.363647 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 13:08:12.363727 master-0 kubenswrapper[7146]: I0318 13:08:12.363699 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 13:08:12.363727 master-0 kubenswrapper[7146]: I0318 13:08:12.363718 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 13:08:12.363921 master-0 kubenswrapper[7146]: I0318 13:08:12.363896 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.363997 master-0 kubenswrapper[7146]: I0318 13:08:12.363978 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 13:08:12.364036 master-0 kubenswrapper[7146]: I0318 13:08:12.364015 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 13:08:12.364036 master-0 kubenswrapper[7146]: I0318 13:08:12.364022 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:08:12.364158 master-0 kubenswrapper[7146]: I0318 13:08:12.364019 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 13:08:12.364195 master-0 kubenswrapper[7146]: I0318 13:08:12.364170 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 13:08:12.364195 master-0 kubenswrapper[7146]: I0318 13:08:12.364184 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.364195 master-0 kubenswrapper[7146]: I0318 13:08:12.364188 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 13:08:12.364287 master-0 kubenswrapper[7146]: I0318 13:08:12.364216 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 13:08:12.364287 master-0 kubenswrapper[7146]: I0318 13:08:12.364183 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 13:08:12.364287 master-0 kubenswrapper[7146]: I0318 13:08:12.364284 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 13:08:12.365429 master-0 kubenswrapper[7146]: I0318 13:08:12.365401 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.365491 master-0 kubenswrapper[7146]: I0318 13:08:12.365469 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 18 13:08:12.365539 master-0 kubenswrapper[7146]: I0318 13:08:12.365498 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 13:08:12.365576 master-0 kubenswrapper[7146]: I0318 13:08:12.365544 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 13:08:12.365616 master-0 kubenswrapper[7146]: I0318 13:08:12.365591 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 13:08:12.365616 master-0 kubenswrapper[7146]: I0318 13:08:12.365604 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.365616 master-0 kubenswrapper[7146]: I0318 13:08:12.365614 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.365723 master-0 kubenswrapper[7146]: I0318 13:08:12.365402 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 13:08:12.365723 master-0 kubenswrapper[7146]: I0318 13:08:12.365695 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.365723 master-0 kubenswrapper[7146]: I0318 13:08:12.365701 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 13:08:12.366094 master-0 kubenswrapper[7146]: I0318 13:08:12.366069 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 18 13:08:12.369184 master-0 kubenswrapper[7146]: I0318 13:08:12.368913 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 13:08:12.372664 master-0 kubenswrapper[7146]: I0318 13:08:12.372529 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 13:08:12.372664 master-0 kubenswrapper[7146]: I0318 13:08:12.372577 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 13:08:12.372853 master-0 kubenswrapper[7146]: I0318 13:08:12.372734 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 13:08:12.372853 master-0 kubenswrapper[7146]: I0318 13:08:12.372777 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 13:08:12.372926 master-0 kubenswrapper[7146]: I0318 13:08:12.372865 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.373603 master-0 kubenswrapper[7146]: I0318 13:08:12.373207 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.373603 master-0 kubenswrapper[7146]: I0318 13:08:12.373288 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 13:08:12.373603 master-0 kubenswrapper[7146]: I0318 13:08:12.373417 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.373603 master-0 kubenswrapper[7146]: I0318 13:08:12.373423 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.373603 master-0 kubenswrapper[7146]: I0318 13:08:12.373541 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 13:08:12.373603 master-0 kubenswrapper[7146]: I0318 13:08:12.373581 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.373850 master-0 kubenswrapper[7146]: I0318 13:08:12.373672 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 13:08:12.373850 master-0 kubenswrapper[7146]: I0318 13:08:12.373741 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.373850 master-0 kubenswrapper[7146]: I0318 13:08:12.373803 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 13:08:12.374082 master-0 kubenswrapper[7146]: I0318 13:08:12.373923 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 13:08:12.374082 master-0 kubenswrapper[7146]: I0318 13:08:12.373982 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.374082 master-0 kubenswrapper[7146]: I0318 13:08:12.374080 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 13:08:12.374416 master-0 kubenswrapper[7146]: I0318 13:08:12.374202 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 13:08:12.374416 master-0 kubenswrapper[7146]: I0318 13:08:12.374218 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 18 13:08:12.374416 master-0 kubenswrapper[7146]: I0318 13:08:12.374375 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:08:12.374416 master-0 kubenswrapper[7146]: I0318 13:08:12.374399 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 13:08:12.374579 master-0 kubenswrapper[7146]: I0318 13:08:12.374471 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.374579 master-0 kubenswrapper[7146]: I0318 13:08:12.374491 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 13:08:12.374649 master-0 kubenswrapper[7146]: I0318 13:08:12.374602 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 13:08:12.374649 master-0 kubenswrapper[7146]: I0318 13:08:12.374628 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 13:08:12.374844 master-0 kubenswrapper[7146]: I0318 13:08:12.374709 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 13:08:12.375124 master-0 kubenswrapper[7146]: I0318 13:08:12.374880 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 13:08:12.375124 master-0 kubenswrapper[7146]: I0318 13:08:12.373677 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 13:08:12.375124 master-0 kubenswrapper[7146]: I0318 13:08:12.374886 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375257 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375326 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375384 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375400 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375423 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375517 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375525 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375622 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375624 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375384 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375692 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375764 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375332 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375860 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.375882 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.376115 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.376176 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.376217 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.376337 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 13:08:12.377635 master-0 kubenswrapper[7146]: I0318 13:08:12.376514 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 13:08:12.379711 master-0 kubenswrapper[7146]: I0318 13:08:12.379672 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 13:08:12.383159 master-0 kubenswrapper[7146]: I0318 13:08:12.383118 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 13:08:12.383557 master-0 kubenswrapper[7146]: I0318 13:08:12.383522 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.383557 master-0 kubenswrapper[7146]: I0318 13:08:12.383545 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 13:08:12.383641 master-0 kubenswrapper[7146]: I0318 13:08:12.383601 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 13:08:12.383686 master-0 kubenswrapper[7146]: I0318 13:08:12.383622 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.387249 master-0 kubenswrapper[7146]: I0318 13:08:12.383730 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 13:08:12.387249 master-0 kubenswrapper[7146]: I0318 13:08:12.383774 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 13:08:12.387249 master-0 kubenswrapper[7146]: I0318 13:08:12.383829 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 13:08:12.387249 master-0 kubenswrapper[7146]: I0318 13:08:12.383871 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:08:12.387249 master-0 kubenswrapper[7146]: I0318 13:08:12.387063 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:08:12.387614 master-0 kubenswrapper[7146]: I0318 13:08:12.387419 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 13:08:12.389885 master-0 kubenswrapper[7146]: I0318 13:08:12.389848 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 13:08:12.394664 master-0 kubenswrapper[7146]: I0318 13:08:12.394624 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 13:08:12.396524 master-0 kubenswrapper[7146]: I0318 13:08:12.396410 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 13:08:12.396524 master-0 kubenswrapper[7146]: I0318 13:08:12.396495 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 13:08:12.397856 master-0 kubenswrapper[7146]: I0318 13:08:12.396890 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:08:12.402832 master-0 kubenswrapper[7146]: I0318 13:08:12.401296 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:08:12.402832 master-0 kubenswrapper[7146]: I0318 13:08:12.401521 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 13:08:12.402832 master-0 kubenswrapper[7146]: I0318 13:08:12.401726 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 13:08:12.403690 master-0 kubenswrapper[7146]: I0318 13:08:12.403651 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 13:08:12.403784 master-0 kubenswrapper[7146]: I0318 13:08:12.403758 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:08:12.404976 master-0 kubenswrapper[7146]: I0318 13:08:12.404733 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 13:08:12.404976 master-0 kubenswrapper[7146]: I0318 13:08:12.404767 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 13:08:12.405840 master-0 kubenswrapper[7146]: I0318 13:08:12.405057 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 13:08:12.405840 master-0 kubenswrapper[7146]: I0318 13:08:12.405078 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 13:08:12.418527 master-0 kubenswrapper[7146]: I0318 13:08:12.418488 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 13:08:12.423763 master-0 kubenswrapper[7146]: I0318 13:08:12.423687 7146 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 13:08:12.440121 master-0 kubenswrapper[7146]: I0318 13:08:12.440032 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 13:08:12.459071 master-0 kubenswrapper[7146]: I0318 13:08:12.458348 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 13:08:12.478876 master-0 kubenswrapper[7146]: I0318 13:08:12.478832 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 13:08:12.494641 master-0 kubenswrapper[7146]: I0318 13:08:12.494569 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:12.494807 master-0 kubenswrapper[7146]: I0318 13:08:12.494759 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.494807 master-0 kubenswrapper[7146]: I0318 13:08:12.494793 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:08:12.494899 master-0 kubenswrapper[7146]: I0318 13:08:12.494813 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:12.494899 master-0 kubenswrapper[7146]: I0318 13:08:12.494839 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss"
Mar 18 13:08:12.494899 master-0 kubenswrapper[7146]: I0318 13:08:12.494857 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:08:12.494899 master-0 kubenswrapper[7146]: I0318 13:08:12.494875 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:12.495031 master-0 kubenswrapper[7146]: I0318 13:08:12.494962 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.495063 master-0 kubenswrapper[7146]: I0318 13:08:12.495030 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb"
Mar 18 13:08:12.495122 master-0 kubenswrapper[7146]: I0318 13:08:12.495082 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:08:12.495187 master-0 kubenswrapper[7146]: I0318 13:08:12.495160 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.495218 master-0 kubenswrapper[7146]: I0318 13:08:12.495197 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:08:12.495250 master-0 kubenswrapper[7146]: I0318 13:08:12.495226 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:08:12.495282 master-0 kubenswrapper[7146]: I0318 13:08:12.495251 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb"
Mar 18 13:08:12.495282 master-0 kubenswrapper[7146]: I0318 13:08:12.495274 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gv8b\" (UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:08:12.495332 master-0 kubenswrapper[7146]: I0318 13:08:12.495295 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:08:12.495461 master-0 kubenswrapper[7146]: I0318 13:08:12.495433 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:08:12.495682 master-0 kubenswrapper[7146]: I0318 13:08:12.495651 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:08:12.495682 master-0 kubenswrapper[7146]: I0318 13:08:12.495668 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb"
Mar 18 13:08:12.495760 master-0 kubenswrapper[7146]: I0318 13:08:12.495725 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9tzl\" (UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:08:12.495760 master-0 kubenswrapper[7146]: I0318 13:08:12.495747 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.495819 master-0 kubenswrapper[7146]: I0318 13:08:12.495766 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:08:12.495819 master-0 kubenswrapper[7146]: I0318 13:08:12.495782 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:08:12.495819 master-0 kubenswrapper[7146]: I0318 13:08:12.495798 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"
Mar 18 13:08:12.495819 master-0 kubenswrapper[7146]: I0318 13:08:12.495815 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:08:12.495918 master-0 kubenswrapper[7146]: I0318 13:08:12.495831 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:08:12.495918 master-0 kubenswrapper[7146]: I0318 13:08:12.495848 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:08:12.496031 master-0 kubenswrapper[7146]: I0318 13:08:12.496009 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf"
Mar 18 13:08:12.496062 master-0 kubenswrapper[7146]: I0318 13:08:12.496034 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:08:12.496115 master-0 kubenswrapper[7146]: I0318 13:08:12.496097 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.496183 master-0 kubenswrapper[7146]: I0318 13:08:12.496121 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.496183 master-0 kubenswrapper[7146]: I0318 13:08:12.496130 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:08:12.496238 master-0 kubenswrapper[7146]: I0318 13:08:12.496136 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"
Mar 18 13:08:12.496238 master-0 kubenswrapper[7146]: I0318 13:08:12.496214 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:08:12.496238 master-0 kubenswrapper[7146]: I0318 13:08:12.496222 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:08:12.496238 master-0 kubenswrapper[7146]: I0318 13:08:12.496232 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.496340 master-0 kubenswrapper[7146]: I0318 13:08:12.496254 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:08:12.496340 master-0 kubenswrapper[7146]: I0318 13:08:12.496263 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:12.496340 master-0 kubenswrapper[7146]: I0318 13:08:12.496321 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.496423 master-0 kubenswrapper[7146]: I0318 13:08:12.496349 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.496423 master-0 kubenswrapper[7146]: I0318 13:08:12.496379 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:12.496423 master-0 kubenswrapper[7146]: I0318 13:08:12.496405 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.496499 master-0 kubenswrapper[7146]: I0318 13:08:12.496433 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:08:12.496499 master-0 kubenswrapper[7146]: I0318 13:08:12.496457 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.496553 master-0 kubenswrapper[7146]: I0318 13:08:12.496509 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:12.496553 master-0 kubenswrapper[7146]: I0318 13:08:12.496535 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:08:12.496606 master-0 kubenswrapper[7146]: I0318 13:08:12.496561 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:12.496606 master-0 kubenswrapper[7146]: I0318 13:08:12.496590 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.496659 master-0 kubenswrapper[7146]: I0318 13:08:12.496617 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:08:12.497080 master-0 kubenswrapper[7146]: I0318 13:08:12.497055 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: 
\"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:12.497183 master-0 kubenswrapper[7146]: I0318 13:08:12.497145 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:08:12.497238 master-0 kubenswrapper[7146]: I0318 13:08:12.497217 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.497474 master-0 kubenswrapper[7146]: I0318 13:08:12.497426 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:08:12.497636 master-0 kubenswrapper[7146]: I0318 13:08:12.497613 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:08:12.497690 master-0 kubenswrapper[7146]: I0318 13:08:12.497634 7146 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:08:12.497729 master-0 kubenswrapper[7146]: I0318 13:08:12.497681 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:08:12.497729 master-0 kubenswrapper[7146]: I0318 13:08:12.497716 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:08:12.497786 master-0 kubenswrapper[7146]: I0318 13:08:12.497751 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:12.497786 master-0 kubenswrapper[7146]: I0318 13:08:12.497761 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:12.497904 master-0 kubenswrapper[7146]: I0318 13:08:12.497881 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:08:12.497967 master-0 kubenswrapper[7146]: I0318 13:08:12.497927 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.498040 master-0 kubenswrapper[7146]: I0318 13:08:12.498015 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:12.498085 master-0 kubenswrapper[7146]: I0318 13:08:12.498060 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:12.498085 master-0 kubenswrapper[7146]: I0318 13:08:12.498069 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: 
\"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:08:12.498085 master-0 kubenswrapper[7146]: I0318 13:08:12.498089 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.498202 master-0 kubenswrapper[7146]: I0318 13:08:12.498151 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:08:12.498202 master-0 kubenswrapper[7146]: I0318 13:08:12.498065 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:08:12.498322 master-0 kubenswrapper[7146]: I0318 13:08:12.498297 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.498391 master-0 kubenswrapper[7146]: I0318 13:08:12.498330 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:08:12.498431 master-0 kubenswrapper[7146]: I0318 13:08:12.498421 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:08:12.498486 master-0 kubenswrapper[7146]: I0318 13:08:12.498452 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:08:12.498486 master-0 kubenswrapper[7146]: I0318 13:08:12.498476 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:08:12.498577 master-0 kubenswrapper[7146]: I0318 13:08:12.498462 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:08:12.498623 master-0 kubenswrapper[7146]: I0318 13:08:12.498576 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.498623 master-0 kubenswrapper[7146]: I0318 13:08:12.498609 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.498697 master-0 kubenswrapper[7146]: I0318 13:08:12.498635 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.498697 master-0 kubenswrapper[7146]: I0318 13:08:12.498656 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:08:12.498697 master-0 kubenswrapper[7146]: I0318 13:08:12.498666 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" 
(UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:08:12.498697 master-0 kubenswrapper[7146]: I0318 13:08:12.498682 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.498838 master-0 kubenswrapper[7146]: I0318 13:08:12.498737 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.498838 master-0 kubenswrapper[7146]: I0318 13:08:12.498775 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.498838 master-0 kubenswrapper[7146]: I0318 13:08:12.498806 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:08:12.498957 master-0 kubenswrapper[7146]: I0318 13:08:12.498839 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.498957 master-0 kubenswrapper[7146]: I0318 13:08:12.498871 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.498957 master-0 kubenswrapper[7146]: I0318 13:08:12.498902 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.498957 master-0 kubenswrapper[7146]: I0318 13:08:12.498919 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:08:12.499099 master-0 kubenswrapper[7146]: I0318 13:08:12.498958 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.499099 master-0 kubenswrapper[7146]: I0318 13:08:12.499004 7146 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.499099 master-0 kubenswrapper[7146]: I0318 13:08:12.499044 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:08:12.499099 master-0 kubenswrapper[7146]: I0318 13:08:12.499091 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.499238 master-0 kubenswrapper[7146]: I0318 13:08:12.499138 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.499238 master-0 kubenswrapper[7146]: I0318 13:08:12.499141 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 
13:08:12.499238 master-0 kubenswrapper[7146]: I0318 13:08:12.499191 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.499343 master-0 kubenswrapper[7146]: I0318 13:08:12.499236 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.499343 master-0 kubenswrapper[7146]: I0318 13:08:12.499291 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:12.499343 master-0 kubenswrapper[7146]: I0318 13:08:12.499313 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.499453 master-0 kubenswrapper[7146]: I0318 13:08:12.499344 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:08:12.499453 master-0 kubenswrapper[7146]: I0318 13:08:12.499408 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.499453 master-0 kubenswrapper[7146]: I0318 13:08:12.499440 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.499735 master-0 kubenswrapper[7146]: I0318 13:08:12.499712 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.499818 master-0 kubenswrapper[7146]: I0318 13:08:12.499762 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.499901 master-0 kubenswrapper[7146]: I0318 
13:08:12.499881 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.500053 master-0 kubenswrapper[7146]: I0318 13:08:12.500029 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.500138 master-0 kubenswrapper[7146]: I0318 13:08:12.500125 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:08:12.500219 master-0 kubenswrapper[7146]: I0318 13:08:12.500190 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:12.500283 master-0 kubenswrapper[7146]: I0318 13:08:12.500218 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:12.500283 master-0 kubenswrapper[7146]: I0318 13:08:12.500242 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.500283 master-0 kubenswrapper[7146]: I0318 13:08:12.500265 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:08:12.500418 master-0 kubenswrapper[7146]: I0318 13:08:12.500322 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:12.500418 master-0 kubenswrapper[7146]: I0318 13:08:12.500345 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:12.500418 master-0 kubenswrapper[7146]: 
I0318 13:08:12.500345 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.500418 master-0 kubenswrapper[7146]: I0318 13:08:12.500368 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:12.500418 master-0 kubenswrapper[7146]: I0318 13:08:12.500392 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:12.500625 master-0 kubenswrapper[7146]: I0318 13:08:12.500505 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:08:12.500625 master-0 kubenswrapper[7146]: I0318 13:08:12.500534 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: 
\"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:12.500625 master-0 kubenswrapper[7146]: I0318 13:08:12.500559 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.500802 master-0 kubenswrapper[7146]: I0318 13:08:12.500776 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:12.500856 master-0 kubenswrapper[7146]: I0318 13:08:12.500844 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:08:12.500927 master-0 kubenswrapper[7146]: I0318 13:08:12.500880 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.500927 master-0 kubenswrapper[7146]: I0318 13:08:12.500898 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:12.501067 master-0 kubenswrapper[7146]: I0318 13:08:12.501033 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:08:12.501112 master-0 kubenswrapper[7146]: I0318 13:08:12.501067 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:08:12.501112 master-0 kubenswrapper[7146]: I0318 13:08:12.501073 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.501250 master-0 kubenswrapper[7146]: I0318 13:08:12.501229 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" 
Mar 18 13:08:12.501292 master-0 kubenswrapper[7146]: I0318 13:08:12.501266 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.501337 master-0 kubenswrapper[7146]: I0318 13:08:12.501303 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdqs\" (UniqueName: \"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:12.501379 master-0 kubenswrapper[7146]: I0318 13:08:12.501338 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:08:12.501379 master-0 kubenswrapper[7146]: I0318 13:08:12.501373 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.501479 master-0 kubenswrapper[7146]: I0318 13:08:12.501456 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc69w\" (UniqueName: 
\"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.501592 master-0 kubenswrapper[7146]: I0318 13:08:12.501549 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.501711 master-0 kubenswrapper[7146]: I0318 13:08:12.501683 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:08:12.501753 master-0 kubenswrapper[7146]: I0318 13:08:12.501738 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:08:12.501782 master-0 kubenswrapper[7146]: I0318 13:08:12.501769 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.501810 master-0 
kubenswrapper[7146]: I0318 13:08:12.501792 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.501842 master-0 kubenswrapper[7146]: I0318 13:08:12.501812 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:08:12.501895 master-0 kubenswrapper[7146]: I0318 13:08:12.501820 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:08:12.502002 master-0 kubenswrapper[7146]: I0318 13:08:12.501974 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:08:12.502068 master-0 kubenswrapper[7146]: I0318 13:08:12.502050 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" 
(UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.502097 master-0 kubenswrapper[7146]: I0318 13:08:12.502083 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:08:12.502202 master-0 kubenswrapper[7146]: I0318 13:08:12.502102 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:08:12.502202 master-0 kubenswrapper[7146]: I0318 13:08:12.502125 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:08:12.502202 master-0 kubenswrapper[7146]: I0318 13:08:12.502145 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.502202 master-0 kubenswrapper[7146]: I0318 13:08:12.502189 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:12.502304 master-0 kubenswrapper[7146]: I0318 13:08:12.502226 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.502304 master-0 kubenswrapper[7146]: I0318 13:08:12.502255 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.502304 master-0 kubenswrapper[7146]: I0318 13:08:12.502261 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:08:12.502304 master-0 kubenswrapper[7146]: I0318 13:08:12.502284 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwfnk\" (UniqueName: 
\"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:08:12.502304 master-0 kubenswrapper[7146]: I0318 13:08:12.502287 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.502433 master-0 kubenswrapper[7146]: I0318 13:08:12.502312 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:08:12.502433 master-0 kubenswrapper[7146]: I0318 13:08:12.502083 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:08:12.502433 master-0 kubenswrapper[7146]: I0318 13:08:12.502365 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:08:12.502433 master-0 kubenswrapper[7146]: I0318 
13:08:12.502366 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:12.502433 master-0 kubenswrapper[7146]: I0318 13:08:12.502433 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:12.502559 master-0 kubenswrapper[7146]: I0318 13:08:12.502454 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:08:12.502590 master-0 kubenswrapper[7146]: I0318 13:08:12.502580 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:12.502617 master-0 kubenswrapper[7146]: I0318 13:08:12.502598 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod 
\"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:08:12.502617 master-0 kubenswrapper[7146]: I0318 13:08:12.502614 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.502681 master-0 kubenswrapper[7146]: I0318 13:08:12.502634 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.502681 master-0 kubenswrapper[7146]: I0318 13:08:12.502637 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:08:12.502681 master-0 kubenswrapper[7146]: I0318 13:08:12.502655 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.502785 master-0 kubenswrapper[7146]: I0318 13:08:12.502697 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.502836 master-0 kubenswrapper[7146]: I0318 13:08:12.502818 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.502870 master-0 kubenswrapper[7146]: I0318 13:08:12.502835 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.502870 master-0 kubenswrapper[7146]: I0318 13:08:12.502838 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.502930 master-0 kubenswrapper[7146]: I0318 13:08:12.502916 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: 
\"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.502990 master-0 kubenswrapper[7146]: I0318 13:08:12.502972 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:08:12.503071 master-0 kubenswrapper[7146]: I0318 13:08:12.503043 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.503229 master-0 kubenswrapper[7146]: I0318 13:08:12.503184 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.503267 master-0 kubenswrapper[7146]: I0318 13:08:12.503232 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:08:12.503267 master-0 kubenswrapper[7146]: I0318 13:08:12.503255 7146 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:08:12.503319 master-0 kubenswrapper[7146]: I0318 13:08:12.503266 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:08:12.503351 master-0 kubenswrapper[7146]: I0318 13:08:12.503318 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.503351 master-0 kubenswrapper[7146]: I0318 13:08:12.503337 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:08:12.503400 master-0 kubenswrapper[7146]: I0318 13:08:12.503372 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: 
\"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.503428 master-0 kubenswrapper[7146]: I0318 13:08:12.503400 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:12.503428 master-0 kubenswrapper[7146]: I0318 13:08:12.503416 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:08:12.503550 master-0 kubenswrapper[7146]: I0318 13:08:12.503533 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:08:12.503602 master-0 kubenswrapper[7146]: I0318 13:08:12.503583 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:08:12.503632 master-0 kubenswrapper[7146]: I0318 13:08:12.503422 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:08:12.503676 master-0 kubenswrapper[7146]: I0318 13:08:12.503652 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw64j\" (UniqueName: \"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:08:12.503705 master-0 kubenswrapper[7146]: I0318 13:08:12.503688 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.503739 master-0 kubenswrapper[7146]: I0318 13:08:12.503723 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:08:12.503767 master-0 kubenswrapper[7146]: I0318 13:08:12.503746 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod 
\"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:12.503795 master-0 kubenswrapper[7146]: I0318 13:08:12.503787 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.503825 master-0 kubenswrapper[7146]: I0318 13:08:12.503807 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.503856 master-0 kubenswrapper[7146]: I0318 13:08:12.503826 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.503903 master-0 kubenswrapper[7146]: I0318 13:08:12.503885 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.503968 master-0 kubenswrapper[7146]: I0318 13:08:12.503931 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.504885 master-0 kubenswrapper[7146]: I0318 13:08:12.504853 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:08:12.519559 master-0 kubenswrapper[7146]: I0318 13:08:12.519514 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:08:12.526224 master-0 kubenswrapper[7146]: I0318 13:08:12.526178 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:08:12.538238 master-0 kubenswrapper[7146]: I0318 13:08:12.538197 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:08:12.559209 master-0 kubenswrapper[7146]: I0318 13:08:12.559085 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:08:12.564870 master-0 kubenswrapper[7146]: I0318 13:08:12.564827 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.578892 master-0 kubenswrapper[7146]: I0318 13:08:12.578857 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 13:08:12.585509 master-0 kubenswrapper[7146]: I0318 13:08:12.585253 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:08:12.605123 master-0 kubenswrapper[7146]: I0318 13:08:12.605068 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.605123 master-0 kubenswrapper[7146]: I0318 13:08:12.605121 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.605123 master-0 kubenswrapper[7146]: I0318 13:08:12.605137 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605159 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605235 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605273 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605345 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605380 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605403 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605408 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605471 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: E0318 13:08:12.605464 7146 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:12.605530 master-0 kubenswrapper[7146]: I0318 13:08:12.605519 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605571 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:08:13.105538403 +0000 UTC m=+1.913755764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605593 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605606 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605596 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605659 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605619 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 
13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605609 7146 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605681 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.105661436 +0000 UTC m=+1.913878877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605654 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605696 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.105690287 +0000 UTC m=+1.913907648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605712 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605733 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605766 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605770 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.105748759 +0000 UTC m=+1.913966120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605791 7146 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605795 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605815 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: E0318 13:08:12.605830 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.105818871 +0000 UTC m=+1.914036232 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605847 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605869 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605875 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605877 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605905 7146 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605911 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605932 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605979 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.605986 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606024 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606038 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606065 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606074 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606097 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606111 7146 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606147 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.606130 master-0 kubenswrapper[7146]: I0318 13:08:12.606169 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606190 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606214 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606224 7146 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606217 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606248 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606259 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606274 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606277 7146 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606294 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606316 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.106302354 +0000 UTC m=+1.914519795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606316 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606353 7146 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606379 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606412 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606446 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:08:13.106437848 +0000 UTC m=+1.914655209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606456 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606480 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606501 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606519 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606537 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606545 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606563 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606575 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606615 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606598 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606623 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606647 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606669 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606690 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606710 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606715 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606733 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606750 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606647 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606780 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606798 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606800 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606852 7146 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606879 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606900 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606904 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606920 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.106912691 +0000 UTC m=+1.915130052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606949 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606957 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.606990 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607011 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607034 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607062 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.606996 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607079 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.107070856 +0000 UTC m=+1.915288207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607118 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607139 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607159 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607171 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607172 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607175 7146 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607213 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607175 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607196 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.107178669 +0000 UTC m=+1.915396030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607259 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.10724533 +0000 UTC m=+1.915462691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607271 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.107265071 +0000 UTC m=+1.915482422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607224 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607302 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.107295372 +0000 UTC m=+1.915512733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607289 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607319 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.107309412 +0000 UTC m=+1.915526773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607347 7146 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607366 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: E0318 13:08:12.607373 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:13.107367034 +0000 UTC m=+1.915584395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607346 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607404 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.608310 master-0 kubenswrapper[7146]: I0318 13:08:12.607427 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:12.742104 master-0 kubenswrapper[7146]: E0318 13:08:12.741791 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 13:08:12.759733 master-0 kubenswrapper[7146]: E0318 13:08:12.756920 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:08:12.759733 master-0 kubenswrapper[7146]: W0318 13:08:12.757347 7146 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 13:08:12.759733 master-0 kubenswrapper[7146]: E0318 13:08:12.758088 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:08:12.759733 master-0 kubenswrapper[7146]: E0318 13:08:12.758291 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 13:08:12.759733 master-0 kubenswrapper[7146]: E0318 13:08:12.759273 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 13:08:12.764611 master-0 kubenswrapper[7146]: I0318 13:08:12.764581 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:12.765268 master-0 kubenswrapper[7146]: I0318 13:08:12.765238 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:12.872573 master-0 kubenswrapper[7146]: I0318 13:08:12.872466 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9tzl\" (UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:08:12.933151 master-0 kubenswrapper[7146]: I0318 13:08:12.933087 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:08:12.934194 master-0 kubenswrapper[7146]: I0318 13:08:12.934163 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gv8b\" (UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:08:12.939171 master-0 kubenswrapper[7146]: I0318 13:08:12.939144 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:08:12.940666 master-0 kubenswrapper[7146]: I0318 13:08:12.940640 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:08:12.941341 master-0 kubenswrapper[7146]: I0318 13:08:12.941313 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:08:12.941519 master-0 kubenswrapper[7146]: I0318 13:08:12.941497 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:12.941655 master-0 kubenswrapper[7146]: I0318 13:08:12.941632 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:08:12.943019 master-0 kubenswrapper[7146]: I0318 13:08:12.942981 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"
Mar 18 13:08:12.955676 master-0 kubenswrapper[7146]: I0318 13:08:12.955614 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:08:12.960825 master-0 kubenswrapper[7146]: I0318 13:08:12.960031 7146 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 13:08:13.030189 master-0 kubenswrapper[7146]: I0318 13:08:13.029998 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"
Mar 18 13:08:13.031143 master-0 kubenswrapper[7146]: I0318 13:08:13.031095 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:08:13.031465 master-0 kubenswrapper[7146]: I0318 13:08:13.031245 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb"
Mar 18 13:08:13.034487 master-0 kubenswrapper[7146]: I0318 13:08:13.034427 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:08:13.052882 master-0 kubenswrapper[7146]: I0318 13:08:13.052758 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:08:13.100357 master-0 kubenswrapper[7146]: I0318 13:08:13.100245 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:08:13.101035 master-0 kubenswrapper[7146]: I0318 13:08:13.100798 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:13.111243 master-0 kubenswrapper[7146]: I0318 13:08:13.111198 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:08:13.113309 master-0 kubenswrapper[7146]: I0318 13:08:13.113263 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:08:13.113309 master-0 kubenswrapper[7146]: I0318 13:08:13.113307 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:13.113511 master-0 kubenswrapper[7146]: E0318 13:08:13.113471 7146 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 13:08:13.113586 master-0 kubenswrapper[7146]: E0318 13:08:13.113559 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.113537379 +0000 UTC m=+2.921754820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found
Mar 18 13:08:13.113634 master-0 kubenswrapper[7146]: I0318 13:08:13.113593 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:13.113670 master-0 kubenswrapper[7146]: I0318 13:08:13.113626 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:08:13.113710 master-0 kubenswrapper[7146]: E0318 13:08:13.113699 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:08:13.113743 master-0 kubenswrapper[7146]: E0318 13:08:13.113729 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.113719544 +0000 UTC m=+2.921936995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found
Mar 18 13:08:13.113789 master-0 kubenswrapper[7146]: E0318 13:08:13.113729 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:08:13.113826 master-0 kubenswrapper[7146]: E0318 13:08:13.113796 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.113778606 +0000 UTC m=+2.921996047 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:08:13.113863 master-0 kubenswrapper[7146]: I0318 13:08:13.113840 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:13.113900 master-0 kubenswrapper[7146]: I0318 13:08:13.113879 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:13.113931 master-0 kubenswrapper[7146]: I0318 13:08:13.113903 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:13.113931 master-0 kubenswrapper[7146]: I0318 13:08:13.113928 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: I0318 13:08:13.113962 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: E0318 13:08:13.113970 7146 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: I0318 13:08:13.113983 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: E0318 13:08:13.114002 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.113993252 +0000 UTC m=+2.922210613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: I0318 13:08:13.114026 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: E0318 13:08:13.114038 7146 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:08:13.114054 master-0 kubenswrapper[7146]: I0318 13:08:13.114055 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114062 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114054974 +0000 UTC m=+2.922272425 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114092 7146 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: I0318 13:08:13.114097 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114162 7146 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114228 7146 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114235 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114107125 +0000 UTC m=+2.922324586 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: I0318 13:08:13.114253 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: I0318 13:08:13.114277 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114283 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114308 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114299291 +0000 UTC m=+2.922516732 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114331 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114322521 +0000 UTC m=+2.922539992 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:13.114460 master-0 kubenswrapper[7146]: E0318 13:08:13.114346 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114338922 +0000 UTC m=+2.922556393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114453 7146 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114516 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114560 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114563 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114542307 +0000 UTC m=+2.922759678 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114592 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114583009 +0000 UTC m=+2.922800380 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114462 7146 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114605 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114492 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114625 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.11461704 +0000 UTC m=+2.922834461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114639 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.11463274 +0000 UTC m=+2.922850211 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114654 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.11464777 +0000 UTC m=+2.922865241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:08:13.116106 master-0 kubenswrapper[7146]: E0318 13:08:13.114672 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:14.114667201 +0000 UTC m=+2.922884672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:13.178419 master-0 kubenswrapper[7146]: I0318 13:08:13.178305 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:13.180732 master-0 kubenswrapper[7146]: I0318 13:08:13.180579 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:13.186524 master-0 kubenswrapper[7146]: I0318 13:08:13.186480 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:08:13.192816 master-0 kubenswrapper[7146]: I0318 13:08:13.192771 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" (UID: 
\"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:13.208772 master-0 kubenswrapper[7146]: I0318 13:08:13.208725 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:08:13.231425 master-0 kubenswrapper[7146]: I0318 13:08:13.231385 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc69w\" (UniqueName: \"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:13.248884 master-0 kubenswrapper[7146]: I0318 13:08:13.248830 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:13.288229 master-0 kubenswrapper[7146]: I0318 13:08:13.288171 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwfnk\" (UniqueName: \"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:08:13.289733 master-0 kubenswrapper[7146]: I0318 13:08:13.289699 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:13.308607 master-0 kubenswrapper[7146]: I0318 13:08:13.308563 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdqs\" (UniqueName: \"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:13.328436 master-0 kubenswrapper[7146]: I0318 13:08:13.328386 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:08:13.347916 master-0 kubenswrapper[7146]: I0318 13:08:13.347865 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:08:13.369081 master-0 kubenswrapper[7146]: I0318 13:08:13.369025 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw64j\" (UniqueName: \"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:08:13.405675 master-0 kubenswrapper[7146]: I0318 13:08:13.405614 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:13.412723 master-0 kubenswrapper[7146]: I0318 13:08:13.412689 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:08:13.782570 master-0 kubenswrapper[7146]: I0318 13:08:13.782491 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:13.904460 master-0 kubenswrapper[7146]: E0318 13:08:13.904352 7146 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" Mar 18 13:08:13.904602 master-0 kubenswrapper[7146]: E0318 13:08:13.904545 7146 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71,Command:[cluster-openshift-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9tzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-8c94f4649-4qs2l_openshift-controller-manager-operator(c9a9baa5-9334-47dc-8d0c-eafc96a679b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 13:08:13.905771 master-0 kubenswrapper[7146]: E0318 13:08:13.905724 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" podUID="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" Mar 18 13:08:14.128098 master-0 kubenswrapper[7146]: I0318 13:08:14.128028 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:14.128098 master-0 kubenswrapper[7146]: I0318 13:08:14.128084 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:14.128380 master-0 kubenswrapper[7146]: E0318 13:08:14.128247 7146 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:08:14.128380 master-0 kubenswrapper[7146]: 
I0318 13:08:14.128265 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:14.128487 master-0 kubenswrapper[7146]: E0318 13:08:14.128349 7146 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:14.128487 master-0 kubenswrapper[7146]: E0318 13:08:14.128358 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128328629 +0000 UTC m=+4.936546070 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:08:14.128487 master-0 kubenswrapper[7146]: I0318 13:08:14.128451 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:14.128487 master-0 kubenswrapper[7146]: E0318 13:08:14.128379 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: E0318 13:08:14.128480 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128446872 +0000 UTC m=+4.936664283 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: E0318 13:08:14.128531 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: E0318 13:08:14.128551 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128540795 +0000 UTC m=+4.936758256 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: I0318 13:08:14.128521 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: E0318 13:08:14.128566 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. 
No retries permitted until 2026-03-18 13:08:16.128558725 +0000 UTC m=+4.936776196 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: I0318 13:08:14.128584 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: I0318 13:08:14.128610 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:14.128631 master-0 kubenswrapper[7146]: E0318 13:08:14.128626 7146 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: I0318 13:08:14.128636 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: 
E0318 13:08:14.128656 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128647588 +0000 UTC m=+4.936864949 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: I0318 13:08:14.128675 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128692 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: I0318 13:08:14.128704 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128720 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert 
podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128710339 +0000 UTC m=+4.936927700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128775 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128777 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128804 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128794912 +0000 UTC m=+4.937012273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128820 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. 
No retries permitted until 2026-03-18 13:08:16.128812822 +0000 UTC m=+4.937030303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128833 7146 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128860 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128851603 +0000 UTC m=+4.937068964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128901 7146 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: E0318 13:08:14.128925 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.128917005 +0000 UTC m=+4.937134436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found Mar 18 13:08:14.128972 master-0 kubenswrapper[7146]: I0318 13:08:14.128960 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: I0318 13:08:14.128986 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: I0318 13:08:14.129014 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129032 7146 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: I0318 13:08:14.129040 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129064 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.129054569 +0000 UTC m=+4.937271980 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: I0318 13:08:14.129083 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129093 7146 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129120 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.129111251 +0000 UTC m=+4.937328692 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129164 7146 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129178 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129189 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.129181353 +0000 UTC m=+4.937398714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129238 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.129197913 +0000 UTC m=+4.937415354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129258 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:08:14.129417 master-0 kubenswrapper[7146]: E0318 13:08:14.129287 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:16.129278985 +0000 UTC m=+4.937496346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:08:14.237259 master-0 kubenswrapper[7146]: E0318 13:08:14.237101 7146 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" Mar 18 13:08:14.237504 master-0 kubenswrapper[7146]: E0318 13:08:14.237393 7146 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:openshift-api,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483,Command:[write-available-featuresets --asset-output-dir=/available-featuregates 
--payload-version=$(OPERATOR_IMAGE_VERSION)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcm8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-95bf4f4d-c7nh9_openshift-config-operator(0213214b-693b-411b-8254-48d7826011eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 13:08:14.241149 master-0 kubenswrapper[7146]: E0318 13:08:14.241112 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-api\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" Mar 18 13:08:14.442732 
master-0 kubenswrapper[7146]: I0318 13:08:14.442692 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:14.446491 master-0 kubenswrapper[7146]: I0318 13:08:14.446450 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:14.508886 master-0 kubenswrapper[7146]: I0318 13:08:14.508364 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:14.534383 master-0 kubenswrapper[7146]: I0318 13:08:14.534326 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:14.881101 master-0 kubenswrapper[7146]: E0318 13:08:14.881032 7146 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" Mar 18 13:08:14.881274 master-0 kubenswrapper[7146]: E0318 13:08:14.881190 7146 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e,Command:[cluster-openshift-apiserver-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f8xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminat
ionMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-d65958b8-lwfvl_openshift-apiserver-operator(cb471665-2b07-48df-9881-3fb663390b23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 13:08:14.882441 master-0 kubenswrapper[7146]: E0318 13:08:14.882397 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" podUID="cb471665-2b07-48df-9881-3fb663390b23" Mar 18 13:08:15.111355 master-0 kubenswrapper[7146]: I0318 13:08:15.111286 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:15.115576 master-0 kubenswrapper[7146]: I0318 13:08:15.115543 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:15.364276 master-0 kubenswrapper[7146]: I0318 13:08:15.364215 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:15.369201 master-0 kubenswrapper[7146]: I0318 13:08:15.368999 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:08:15.448132 master-0 kubenswrapper[7146]: I0318 13:08:15.447723 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:15.448132 master-0 kubenswrapper[7146]: I0318 13:08:15.447752 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:15.475328 master-0 
kubenswrapper[7146]: E0318 13:08:15.475270 7146 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" Mar 18 13:08:15.475500 master-0 kubenswrapper[7146]: E0318 13:08:15.475442 7146 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cw64j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf_openshift-kube-storage-version-migrator-operator(1ad580a2-7f58-4d66-adad-0a53d9777655): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 13:08:15.476729 master-0 kubenswrapper[7146]: E0318 13:08:15.476677 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" podUID="1ad580a2-7f58-4d66-adad-0a53d9777655" Mar 18 13:08:15.900297 master-0 kubenswrapper[7146]: E0318 13:08:15.900048 7146 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" Mar 18 13:08:15.900297 master-0 kubenswrapper[7146]: E0318 13:08:15.900230 7146 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e46378af340ca82a8551fdfa20d0acf4ff4a5d43ceb0d4748eebc55be437d04,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4b6rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-5f5d689c6b-4s6b8_openshift-cluster-storage-operator(5bccf60c-5b07-4f40-8430-12bfb62661c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 13:08:15.901418 master-0 kubenswrapper[7146]: E0318 13:08:15.901380 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" podUID="5bccf60c-5b07-4f40-8430-12bfb62661c7" Mar 18 13:08:16.150910 master-0 kubenswrapper[7146]: I0318 13:08:16.150796 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 
13:08:16.150910 master-0 kubenswrapper[7146]: I0318 13:08:16.150848 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:16.150910 master-0 kubenswrapper[7146]: I0318 13:08:16.150876 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:16.150910 master-0 kubenswrapper[7146]: I0318 13:08:16.150893 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:16.150910 master-0 kubenswrapper[7146]: I0318 13:08:16.150916 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.150931 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.150969 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.150986 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.151001 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.151018 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.151032 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.151049 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: I0318 13:08:16.151069 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:16.151211 master-0 kubenswrapper[7146]: E0318 13:08:16.151179 7146 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151232 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151218609 +0000 UTC m=+8.959435970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151327 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151412 7146 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151419 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151401225 +0000 UTC m=+8.959618586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151461 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151451746 +0000 UTC m=+8.959669197 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151471 7146 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:08:16.151533 master-0 kubenswrapper[7146]: E0318 13:08:16.151499 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151490387 +0000 UTC m=+8.959707748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151550 7146 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151580 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151572169 +0000 UTC m=+8.959789530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151618 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151635 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151628941 +0000 UTC m=+8.959846292 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151666 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151682 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151676552 +0000 UTC m=+8.959894023 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151711 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:16.151734 master-0 kubenswrapper[7146]: E0318 13:08:16.151726 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151721683 +0000 UTC m=+8.959939044 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151764 7146 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151793 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151784505 +0000 UTC m=+8.960001866 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151820 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151838 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151832477 +0000 UTC m=+8.960049838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151879 7146 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151906 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.151898658 +0000 UTC m=+8.960116019 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151928 7146 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: I0318 13:08:16.151965 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:16.151983 master-0 kubenswrapper[7146]: E0318 13:08:16.151979 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.1519691 +0000 UTC m=+8.960186541 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: I0318 13:08:16.152010 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: E0318 13:08:16.152031 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: E0318 13:08:16.152060 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.152051763 +0000 UTC m=+8.960269184 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: E0318 13:08:16.152067 7146 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: E0318 13:08:16.152089 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.152082814 +0000 UTC m=+8.960300175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: E0318 13:08:16.152104 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:08:16.152214 master-0 kubenswrapper[7146]: E0318 13:08:16.152141 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:20.152130585 +0000 UTC m=+8.960347946 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:08:16.451673 master-0 kubenswrapper[7146]: I0318 13:08:16.451631 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerStarted","Data":"7c8f77a7d65f8fc3bc4cbe1de5c1b2400c99f286cccd6e89e58de1418e09f721"} Mar 18 13:08:16.451673 master-0 kubenswrapper[7146]: I0318 13:08:16.451666 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:17.455735 master-0 kubenswrapper[7146]: I0318 13:08:17.455391 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" event={"ID":"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41","Type":"ContainerStarted","Data":"73eeb12fc6c56e08bfbb513524488ba1e9f64fd246eaef82ed0bfd67ecb4ec86"} Mar 18 13:08:17.457532 master-0 kubenswrapper[7146]: I0318 13:08:17.457507 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" event={"ID":"93ea3c78-dede-468f-89a5-551133f794c5","Type":"ContainerStarted","Data":"ef423dc670cb4c823cf16513eca393eb2237d93c1c3d72d4a3125b276f8fdce7"} Mar 18 13:08:17.458978 master-0 kubenswrapper[7146]: I0318 13:08:17.458917 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" event={"ID":"83a4f641-d28f-42aa-a228-f6086d720fe4","Type":"ContainerStarted","Data":"f0be59386377b23fb8fc7601c10eb271b7e5a273e5f53453eae290b11eb4345f"} Mar 18 13:08:17.459996 master-0 kubenswrapper[7146]: I0318 
13:08:17.459929 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" event={"ID":"1bf0ea4e-8b08-488f-b252-39580f46b756","Type":"ContainerStarted","Data":"fd0bf4a4bcfb53e14fbaa9e4b5ac94436e182002bb238e07513655ae02a57f1d"} Mar 18 13:08:17.468430 master-0 kubenswrapper[7146]: I0318 13:08:17.468391 7146 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" containerID="7c8f77a7d65f8fc3bc4cbe1de5c1b2400c99f286cccd6e89e58de1418e09f721" exitCode=0 Mar 18 13:08:17.468430 master-0 kubenswrapper[7146]: I0318 13:08:17.468430 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerDied","Data":"7c8f77a7d65f8fc3bc4cbe1de5c1b2400c99f286cccd6e89e58de1418e09f721"} Mar 18 13:08:18.182770 master-0 kubenswrapper[7146]: I0318 13:08:18.182693 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:18.183061 master-0 kubenswrapper[7146]: I0318 13:08:18.182853 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:18.183061 master-0 kubenswrapper[7146]: I0318 13:08:18.182867 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:18.208343 master-0 kubenswrapper[7146]: I0318 13:08:18.208276 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:08:18.472106 master-0 kubenswrapper[7146]: I0318 13:08:18.472017 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:20.043107 master-0 kubenswrapper[7146]: I0318 13:08:20.043060 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:08:20.055558 master-0 
kubenswrapper[7146]: I0318 13:08:20.055512 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:20.055771 master-0 kubenswrapper[7146]: I0318 13:08:20.055653 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:08:20.060622 master-0 kubenswrapper[7146]: I0318 13:08:20.060580 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:20.203025 master-0 kubenswrapper[7146]: I0318 13:08:20.202958 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:20.203025 master-0 kubenswrapper[7146]: I0318 13:08:20.203016 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:20.203365 master-0 kubenswrapper[7146]: I0318 13:08:20.203049 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 
13:08:20.203365 master-0 kubenswrapper[7146]: I0318 13:08:20.203073 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:20.203365 master-0 kubenswrapper[7146]: I0318 13:08:20.203095 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:20.203365 master-0 kubenswrapper[7146]: E0318 13:08:20.203240 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:08:20.203365 master-0 kubenswrapper[7146]: E0318 13:08:20.203333 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.203309877 +0000 UTC m=+17.011527318 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found Mar 18 13:08:20.203557 master-0 kubenswrapper[7146]: I0318 13:08:20.203459 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:20.203630 master-0 kubenswrapper[7146]: E0318 13:08:20.203604 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:08:20.203666 master-0 kubenswrapper[7146]: E0318 13:08:20.203660 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.203644117 +0000 UTC m=+17.011861478 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found Mar 18 13:08:20.203700 master-0 kubenswrapper[7146]: E0318 13:08:20.203679 7146 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:20.203761 master-0 kubenswrapper[7146]: E0318 13:08:20.203735 7146 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:20.203792 master-0 kubenswrapper[7146]: E0318 13:08:20.203740 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.203720389 +0000 UTC m=+17.011937850 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:08:20.203827 master-0 kubenswrapper[7146]: E0318 13:08:20.203806 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert podName:162e25c0-761c-4414-8c29-f6931afdb7b2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.203783591 +0000 UTC m=+17.012001052 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert") pod "cluster-version-operator-56d8475767-l6hzm" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:08:20.203827 master-0 kubenswrapper[7146]: E0318 13:08:20.203814 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:08:20.203884 master-0 kubenswrapper[7146]: E0318 13:08:20.203846 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.203837692 +0000 UTC m=+17.012055053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "node-tuning-operator-tls" not found Mar 18 13:08:20.203930 master-0 kubenswrapper[7146]: I0318 13:08:20.203879 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:08:20.203930 master-0 kubenswrapper[7146]: I0318 13:08:20.203910 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" 
(UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:20.204053 master-0 kubenswrapper[7146]: I0318 13:08:20.203970 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:08:20.204053 master-0 kubenswrapper[7146]: E0318 13:08:20.203972 7146 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:08:20.204053 master-0 kubenswrapper[7146]: E0318 13:08:20.203983 7146 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:20.204053 master-0 kubenswrapper[7146]: E0318 13:08:20.204013 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls podName:73c93ee3-cf14-4fea-b2a7-ccfb56e55be4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.204004637 +0000 UTC m=+17.012221988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-n995f" (UID: "73c93ee3-cf14-4fea-b2a7-ccfb56e55be4") : secret "image-registry-operator-tls" not found Mar 18 13:08:20.204053 master-0 kubenswrapper[7146]: E0318 13:08:20.204020 7146 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:08:20.204053 master-0 kubenswrapper[7146]: E0318 13:08:20.204033 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert podName:369e9689-e2f6-4276-b096-8db094f8d6ae nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.204023677 +0000 UTC m=+17.012241168 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-p6tvz" (UID: "369e9689-e2f6-4276-b096-8db094f8d6ae") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: I0318 13:08:20.204063 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: E0318 13:08:20.204071 7146 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: E0318 13:08:20.204100 7146 secret.go:189] Couldn't get secret 
openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: E0318 13:08:20.204088 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.204077019 +0000 UTC m=+17.012294470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: E0318 13:08:20.204138 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls podName:a01c92f5-7938-437d-8262-11598bd8023c nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.20412792 +0000 UTC m=+17.012345351 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-7w5g8" (UID: "a01c92f5-7938-437d-8262-11598bd8023c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: I0318 13:08:20.204158 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: I0318 13:08:20.204188 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: E0318 13:08:20.204196 7146 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: E0318 13:08:20.204202 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls podName:da6a763d-2777-40c4-ae1f-c77ced406ea2 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.204192592 +0000 UTC m=+17.012409953 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls") pod "dns-operator-9c5679d8f-bqbzx" (UID: "da6a763d-2777-40c4-ae1f-c77ced406ea2") : secret "metrics-tls" not found Mar 18 13:08:20.204231 master-0 kubenswrapper[7146]: I0318 13:08:20.204226 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:20.204679 master-0 kubenswrapper[7146]: E0318 13:08:20.204228 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.204220123 +0000 UTC m=+17.012437484 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:08:20.204679 master-0 kubenswrapper[7146]: I0318 13:08:20.204289 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:20.204679 master-0 kubenswrapper[7146]: I0318 13:08:20.204320 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:08:20.204679 master-0 kubenswrapper[7146]: E0318 13:08:20.204247 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:08:20.204679 master-0 kubenswrapper[7146]: E0318 13:08:20.204462 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.204449259 +0000 UTC m=+17.012666720 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found Mar 18 13:08:20.205031 master-0 kubenswrapper[7146]: E0318 13:08:20.204332 7146 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:08:20.205120 master-0 kubenswrapper[7146]: E0318 13:08:20.205044 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.205033556 +0000 UTC m=+17.013250917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found Mar 18 13:08:20.205120 master-0 kubenswrapper[7146]: E0318 13:08:20.204372 7146 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:08:20.205120 master-0 kubenswrapper[7146]: E0318 13:08:20.205080 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.205073677 +0000 UTC m=+17.013291158 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found Mar 18 13:08:20.205120 master-0 kubenswrapper[7146]: E0318 13:08:20.204409 7146 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:08:20.205328 master-0 kubenswrapper[7146]: E0318 13:08:20.205123 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls podName:f2b92a53-0b61-4e1d-a306-f9a498e48b38 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.205116358 +0000 UTC m=+17.013333719 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls") pod "ingress-operator-66b84d69b-xwqsb" (UID: "f2b92a53-0b61-4e1d-a306-f9a498e48b38") : secret "metrics-tls" not found Mar 18 13:08:20.226303 master-0 kubenswrapper[7146]: I0318 13:08:20.226262 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-855bx"] Mar 18 13:08:20.226543 master-0 kubenswrapper[7146]: E0318 13:08:20.226405 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:08:20.226543 master-0 kubenswrapper[7146]: I0318 13:08:20.226418 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:08:20.226543 master-0 kubenswrapper[7146]: E0318 13:08:20.226430 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerName="prober" Mar 18 
13:08:20.226543 master-0 kubenswrapper[7146]: I0318 13:08:20.226437 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerName="prober" Mar 18 13:08:20.226543 master-0 kubenswrapper[7146]: I0318 13:08:20.226488 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c6ab19-9232-47bd-95da-136641cc3f2d" containerName="prober" Mar 18 13:08:20.226543 master-0 kubenswrapper[7146]: I0318 13:08:20.226496 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:08:20.226737 master-0 kubenswrapper[7146]: I0318 13:08:20.226719 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.228457 master-0 kubenswrapper[7146]: I0318 13:08:20.228434 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 13:08:20.228544 master-0 kubenswrapper[7146]: I0318 13:08:20.228518 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:08:20.228708 master-0 kubenswrapper[7146]: I0318 13:08:20.228690 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 13:08:20.228831 master-0 kubenswrapper[7146]: I0318 13:08:20.228813 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:08:20.232990 master-0 kubenswrapper[7146]: I0318 13:08:20.232951 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-855bx"] Mar 18 13:08:20.305886 master-0 kubenswrapper[7146]: I0318 13:08:20.305478 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-cabundle\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.305886 master-0 kubenswrapper[7146]: I0318 13:08:20.305840 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-key\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.305886 master-0 kubenswrapper[7146]: I0318 13:08:20.305873 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmhd\" (UniqueName: \"kubernetes.io/projected/3a039fc2-b0af-4b2c-a884-1c274c08064d-kube-api-access-pmmhd\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.407114 master-0 kubenswrapper[7146]: I0318 13:08:20.407063 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-cabundle\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.407320 master-0 kubenswrapper[7146]: I0318 13:08:20.407128 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-key\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.407320 master-0 kubenswrapper[7146]: I0318 13:08:20.407160 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmhd\" (UniqueName: \"kubernetes.io/projected/3a039fc2-b0af-4b2c-a884-1c274c08064d-kube-api-access-pmmhd\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.408350 master-0 kubenswrapper[7146]: I0318 13:08:20.408332 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-cabundle\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.409317 master-0 kubenswrapper[7146]: I0318 13:08:20.409287 7146 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 13:08:20.415069 master-0 kubenswrapper[7146]: I0318 13:08:20.415023 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-key\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.429001 master-0 kubenswrapper[7146]: I0318 13:08:20.428971 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmhd\" (UniqueName: \"kubernetes.io/projected/3a039fc2-b0af-4b2c-a884-1c274c08064d-kube-api-access-pmmhd\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.550995 master-0 kubenswrapper[7146]: I0318 13:08:20.550897 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:08:20.881277 master-0 kubenswrapper[7146]: I0318 13:08:20.881116 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-855bx"] Mar 18 13:08:20.905383 master-0 kubenswrapper[7146]: W0318 13:08:20.905291 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a039fc2_b0af_4b2c_a884_1c274c08064d.slice/crio-681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b WatchSource:0}: Error finding container 681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b: Status 404 returned error can't find the container with id 681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b Mar 18 13:08:21.497527 master-0 kubenswrapper[7146]: I0318 13:08:21.497085 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" event={"ID":"3a039fc2-b0af-4b2c-a884-1c274c08064d","Type":"ContainerStarted","Data":"d7e8c2fdb968a1130191a8765d10f0d71f285ef10fc757a0ab5ebbff82c6fcc5"} Mar 18 13:08:21.497527 master-0 kubenswrapper[7146]: I0318 13:08:21.497537 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" event={"ID":"3a039fc2-b0af-4b2c-a884-1c274c08064d","Type":"ContainerStarted","Data":"681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b"} Mar 18 13:08:21.500251 master-0 kubenswrapper[7146]: I0318 13:08:21.500142 7146 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" containerID="aa564c30adb5b4df8107a74993a455b716489617f02c382f60c47021de96afac" exitCode=0 Mar 18 13:08:21.500251 master-0 kubenswrapper[7146]: I0318 13:08:21.500209 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" 
event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerDied","Data":"aa564c30adb5b4df8107a74993a455b716489617f02c382f60c47021de96afac"} Mar 18 13:08:21.534179 master-0 kubenswrapper[7146]: I0318 13:08:21.534080 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" podStartSLOduration=1.534052169 podStartE2EDuration="1.534052169s" podCreationTimestamp="2026-03-18 13:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:21.532439524 +0000 UTC m=+10.340656905" watchObservedRunningTime="2026-03-18 13:08:21.534052169 +0000 UTC m=+10.342269560" Mar 18 13:08:21.829640 master-0 kubenswrapper[7146]: I0318 13:08:21.829479 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:21.833742 master-0 kubenswrapper[7146]: I0318 13:08:21.833700 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:22.513303 master-0 kubenswrapper[7146]: I0318 13:08:22.513249 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:08:26.521901 master-0 kubenswrapper[7146]: I0318 13:08:26.521810 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" event={"ID":"c9a9baa5-9334-47dc-8d0c-eafc96a679b3","Type":"ContainerStarted","Data":"50dc217c7e050a83d8f94c0b071aa6cc499aaacdf4273693193aaa83fb657bb6"} Mar 18 13:08:27.526837 master-0 kubenswrapper[7146]: I0318 13:08:27.526794 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" 
event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerStarted","Data":"05341255eca6050db7cb2260fb5dd7a45d91c7026314e974f2c6b81b9259883f"} Mar 18 13:08:27.529743 master-0 kubenswrapper[7146]: I0318 13:08:27.529710 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" event={"ID":"1ad580a2-7f58-4d66-adad-0a53d9777655","Type":"ContainerStarted","Data":"efc42902b5c4767324208b71a30ab164f9e409ceb38a1c7d04d92fd8042f56d6"} Mar 18 13:08:27.531214 master-0 kubenswrapper[7146]: I0318 13:08:27.531182 7146 generic.go:334] "Generic (PLEG): container finished" podID="0213214b-693b-411b-8254-48d7826011eb" containerID="5c89794c76d4515a3e7d3c02069fb4c61a25855d4eed6b9182b128d2ddf1520d" exitCode=0 Mar 18 13:08:27.531287 master-0 kubenswrapper[7146]: I0318 13:08:27.531218 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerDied","Data":"5c89794c76d4515a3e7d3c02069fb4c61a25855d4eed6b9182b128d2ddf1520d"} Mar 18 13:08:27.683959 master-0 kubenswrapper[7146]: I0318 13:08:27.682498 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-t4hlq"] Mar 18 13:08:27.683959 master-0 kubenswrapper[7146]: I0318 13:08:27.683078 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.690960 master-0 kubenswrapper[7146]: I0318 13:08:27.688399 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:08:27.690960 master-0 kubenswrapper[7146]: I0318 13:08:27.688874 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:08:27.690960 master-0 kubenswrapper[7146]: I0318 13:08:27.688982 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:08:27.690960 master-0 kubenswrapper[7146]: I0318 13:08:27.689000 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:08:27.690960 master-0 kubenswrapper[7146]: I0318 13:08:27.689054 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:08:27.690960 master-0 kubenswrapper[7146]: I0318 13:08:27.689166 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:08:27.701332 master-0 kubenswrapper[7146]: I0318 13:08:27.701235 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-t4hlq"] Mar 18 13:08:27.752767 master-0 kubenswrapper[7146]: I0318 13:08:27.752713 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.752767 master-0 kubenswrapper[7146]: I0318 13:08:27.752754 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxb68\" (UniqueName: \"kubernetes.io/projected/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-kube-api-access-zxb68\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.752767 master-0 kubenswrapper[7146]: I0318 13:08:27.752779 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.753033 master-0 kubenswrapper[7146]: I0318 13:08:27.752833 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.753033 master-0 kubenswrapper[7146]: I0318 13:08:27.752885 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.853642 master-0 kubenswrapper[7146]: I0318 13:08:27.853497 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") pod \"controller-manager-f5df8899c-t4hlq\" 
(UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.853642 master-0 kubenswrapper[7146]: I0318 13:08:27.853581 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxb68\" (UniqueName: \"kubernetes.io/projected/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-kube-api-access-zxb68\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.853642 master-0 kubenswrapper[7146]: I0318 13:08:27.853622 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.854062 master-0 kubenswrapper[7146]: E0318 13:08:27.853654 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 13:08:27.854062 master-0 kubenswrapper[7146]: E0318 13:08:27.853737 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.353713906 +0000 UTC m=+17.161931267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "config" not found Mar 18 13:08:27.854062 master-0 kubenswrapper[7146]: E0318 13:08:27.853789 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 13:08:27.854062 master-0 kubenswrapper[7146]: I0318 13:08:27.853801 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.854062 master-0 kubenswrapper[7146]: E0318 13:08:27.853857 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.353836989 +0000 UTC m=+17.162054380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "openshift-global-ca" not found Mar 18 13:08:27.854062 master-0 kubenswrapper[7146]: I0318 13:08:27.854044 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:27.854297 master-0 kubenswrapper[7146]: E0318 13:08:27.854189 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:08:27.854297 master-0 kubenswrapper[7146]: E0318 13:08:27.853860 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:08:27.854297 master-0 kubenswrapper[7146]: E0318 13:08:27.854232 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.35421888 +0000 UTC m=+17.162436271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : secret "serving-cert" not found
Mar 18 13:08:27.854297 master-0 kubenswrapper[7146]: E0318 13:08:27.854257 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:28.354245991 +0000 UTC m=+17.162463392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "client-ca" not found
Mar 18 13:08:27.881275 master-0 kubenswrapper[7146]: I0318 13:08:27.881214 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxb68\" (UniqueName: \"kubernetes.io/projected/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-kube-api-access-zxb68\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq"
Mar 18 13:08:28.239141 master-0 kubenswrapper[7146]: I0318 13:08:28.237695 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"]
Mar 18 13:08:28.239141 master-0 kubenswrapper[7146]: I0318 13:08:28.238345 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"
Mar 18 13:08:28.245458 master-0 kubenswrapper[7146]: I0318 13:08:28.243234 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 18 13:08:28.245458 master-0 kubenswrapper[7146]: I0318 13:08:28.243249 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256717 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256777 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256812 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256842 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256888 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256917 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256963 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.256991 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257012 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257035 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257076 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257100 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257129 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257153 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: I0318 13:08:28.257177 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: E0318 13:08:28.257350 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 13:08:28.257624 master-0 kubenswrapper[7146]: E0318 13:08:28.257408 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert podName:47f82c03-65d1-4a6c-ba09-8a00ae778009 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.257389413 +0000 UTC m=+33.065606774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert") pod "catalog-operator-68f85b4d6c-p9k56" (UID: "47f82c03-65d1-4a6c-ba09-8a00ae778009") : secret "catalog-operator-serving-cert" not found
Mar 18 13:08:28.259080 master-0 kubenswrapper[7146]: E0318 13:08:28.258841 7146 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 13:08:28.259080 master-0 kubenswrapper[7146]: E0318 13:08:28.258922 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs podName:5e691486-8540-4b79-8eed-b0fb829071db nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.258904065 +0000 UTC m=+33.067121426 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs") pod "network-metrics-daemon-kq2j4" (UID: "5e691486-8540-4b79-8eed-b0fb829071db") : secret "metrics-daemon-secret" not found
Mar 18 13:08:28.259233 master-0 kubenswrapper[7146]: E0318 13:08:28.259171 7146 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 13:08:28.259363 master-0 kubenswrapper[7146]: E0318 13:08:28.259253 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics podName:330df925-8429-4b96-9bfe-caa017c21afa nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.259228424 +0000 UTC m=+33.067445805 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-4v84b" (UID: "330df925-8429-4b96-9bfe-caa017c21afa") : secret "marketplace-operator-metrics" not found
Mar 18 13:08:28.259422 master-0 kubenswrapper[7146]: E0318 13:08:28.259358 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:08:28.259422 master-0 kubenswrapper[7146]: E0318 13:08:28.259409 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert podName:35925474-e3fe-4cff-aad6-d853816618c7 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.259395619 +0000 UTC m=+33.067613050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert") pod "olm-operator-5c9796789-8r4hr" (UID: "35925474-e3fe-4cff-aad6-d853816618c7") : secret "olm-operator-serving-cert" not found
Mar 18 13:08:28.259422 master-0 kubenswrapper[7146]: E0318 13:08:28.259417 7146 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:08:28.259625 master-0 kubenswrapper[7146]: E0318 13:08:28.259451 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert podName:36db10b8-33a2-4b54-85e2-9809eb6bc37d nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.25944152 +0000 UTC m=+33.067658891 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-kbpvr" (UID: "36db10b8-33a2-4b54-85e2-9809eb6bc37d") : secret "package-server-manager-serving-cert" not found
Mar 18 13:08:28.259625 master-0 kubenswrapper[7146]: E0318 13:08:28.259569 7146 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:08:28.259625 master-0 kubenswrapper[7146]: E0318 13:08:28.259599 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls podName:ee1eb80b-5a76-443f-a534-54d5bdc0c98a nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.259590434 +0000 UTC m=+33.067807805 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-jfdn5" (UID: "ee1eb80b-5a76-443f-a534-54d5bdc0c98a") : secret "cluster-monitoring-operator-tls" not found
Mar 18 13:08:28.259827 master-0 kubenswrapper[7146]: E0318 13:08:28.259719 7146 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:08:28.259827 master-0 kubenswrapper[7146]: E0318 13:08:28.259747 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs podName:906c0fd3-3bcd-4c6c-8505-b3517bae06b4 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.259738219 +0000 UTC m=+33.067955590 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zvsmb" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4") : secret "multus-admission-controller-secret" not found
Mar 18 13:08:28.268714 master-0 kubenswrapper[7146]: I0318 13:08:28.268251 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:28.268714 master-0 kubenswrapper[7146]: I0318 13:08:28.268410 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:28.268714 master-0 kubenswrapper[7146]: I0318 13:08:28.268444 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:28.268714 master-0 kubenswrapper[7146]: I0318 13:08:28.268412 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:08:28.268714 master-0 kubenswrapper[7146]: I0318 13:08:28.268518 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:28.269184 master-0 kubenswrapper[7146]: I0318 13:08:28.268813 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:28.270948 master-0 kubenswrapper[7146]: I0318 13:08:28.269447 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:28.270948 master-0 kubenswrapper[7146]: I0318 13:08:28.270735 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"cluster-version-operator-56d8475767-l6hzm\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:28.281883 master-0 kubenswrapper[7146]: I0318 13:08:28.281772 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"]
Mar 18 13:08:28.358493 master-0 kubenswrapper[7146]: I0318 13:08:28.358319 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq"
Mar 18 13:08:28.358493 master-0 kubenswrapper[7146]: I0318 13:08:28.358373 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq"
Mar 18 13:08:28.358493 master-0 kubenswrapper[7146]: E0318 13:08:28.358401 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 18 13:08:28.358493 master-0 kubenswrapper[7146]: I0318 13:08:28.358422 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbksj\" (UniqueName: \"kubernetes.io/projected/9ca94153-9d1a-4b0a-a3eb-556e85f2e875-kube-api-access-hbksj\") pod \"migrator-8487694857-vf6mv\" (UID: \"9ca94153-9d1a-4b0a-a3eb-556e85f2e875\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"
Mar 18 13:08:28.358493 master-0 kubenswrapper[7146]: E0318 13:08:28.358455 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:29.358436421 +0000 UTC m=+18.166653782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "config" not found
Mar 18 13:08:28.358493 master-0 kubenswrapper[7146]: E0318 13:08:28.358494 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 18 13:08:28.358799 master-0 kubenswrapper[7146]: E0318 13:08:28.358536 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:29.358512913 +0000 UTC m=+18.166730274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "openshift-global-ca" not found
Mar 18 13:08:28.359108 master-0 kubenswrapper[7146]: I0318 13:08:28.358856 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq"
Mar 18 13:08:28.359108 master-0 kubenswrapper[7146]: I0318 13:08:28.358912 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq"
Mar 18 13:08:28.359108 master-0 kubenswrapper[7146]: E0318 13:08:28.359028 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:08:28.359108 master-0 kubenswrapper[7146]: E0318 13:08:28.359054 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:29.359046858 +0000 UTC m=+18.167264219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : secret "serving-cert" not found
Mar 18 13:08:28.359108 master-0 kubenswrapper[7146]: E0318 13:08:28.359080 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:08:28.359108 master-0 kubenswrapper[7146]: E0318 13:08:28.359096 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:29.359091409 +0000 UTC m=+18.167308760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "client-ca" not found
Mar 18 13:08:28.461018 master-0 kubenswrapper[7146]: I0318 13:08:28.459795 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbksj\" (UniqueName: \"kubernetes.io/projected/9ca94153-9d1a-4b0a-a3eb-556e85f2e875-kube-api-access-hbksj\") pod \"migrator-8487694857-vf6mv\" (UID: \"9ca94153-9d1a-4b0a-a3eb-556e85f2e875\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"
Mar 18 13:08:28.487011 master-0 kubenswrapper[7146]: I0318 13:08:28.486930 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbksj\" (UniqueName: \"kubernetes.io/projected/9ca94153-9d1a-4b0a-a3eb-556e85f2e875-kube-api-access-hbksj\") pod \"migrator-8487694857-vf6mv\" (UID: \"9ca94153-9d1a-4b0a-a3eb-556e85f2e875\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"
Mar 18 13:08:28.538676 master-0 kubenswrapper[7146]: I0318 13:08:28.538630 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-t4hlq"]
Mar 18 13:08:28.539503 master-0 kubenswrapper[7146]: E0318 13:08:28.538929 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" podUID="eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00"
Mar 18 13:08:28.539503 master-0 kubenswrapper[7146]: I0318 13:08:28.539475 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"]
Mar 18 13:08:28.540032 master-0 kubenswrapper[7146]: I0318 13:08:28.540007 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.543114 master-0 kubenswrapper[7146]: I0318 13:08:28.542863 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-cwpkz_16a930da-d793-486f-bcef-cf042d3c427d/cluster-olm-operator/0.log"
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.544602 7146 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" containerID="05341255eca6050db7cb2260fb5dd7a45d91c7026314e974f2c6b81b9259883f" exitCode=255
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.544635 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerDied","Data":"05341255eca6050db7cb2260fb5dd7a45d91c7026314e974f2c6b81b9259883f"}
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.544974 7146 scope.go:117] "RemoveContainer" containerID="05341255eca6050db7cb2260fb5dd7a45d91c7026314e974f2c6b81b9259883f"
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.545443 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.545614 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.545751 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.545905 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 13:08:28.548867 master-0 kubenswrapper[7146]: I0318 13:08:28.546062 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:08:28.557988 master-0 kubenswrapper[7146]: I0318 13:08:28.556352 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"]
Mar 18 13:08:28.565487 master-0 kubenswrapper[7146]: I0318 13:08:28.561101 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:08:28.565487 master-0 kubenswrapper[7146]: I0318 13:08:28.561461 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"
Mar 18 13:08:28.565487 master-0 kubenswrapper[7146]: I0318 13:08:28.561772 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:08:28.565487 master-0 kubenswrapper[7146]: I0318 13:08:28.562030 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:08:28.565487 master-0 kubenswrapper[7146]: I0318 13:08:28.563111 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:08:28.565487 master-0 kubenswrapper[7146]: I0318 13:08:28.563521 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:08:28.601082 master-0 kubenswrapper[7146]: I0318 13:08:28.601029 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"
Mar 18 13:08:28.656132 master-0 kubenswrapper[7146]: W0318 13:08:28.655224 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod162e25c0_761c_4414_8c29_f6931afdb7b2.slice/crio-b673179a522c0bd8e3a6cee919a5e39aa033f6535630e64de88d9e832bdf7a59 WatchSource:0}: Error finding container b673179a522c0bd8e3a6cee919a5e39aa033f6535630e64de88d9e832bdf7a59: Status 404 returned error can't find the container with id b673179a522c0bd8e3a6cee919a5e39aa033f6535630e64de88d9e832bdf7a59
Mar 18 13:08:28.667530 master-0 kubenswrapper[7146]: I0318 13:08:28.661516 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-config\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.667530 master-0 kubenswrapper[7146]: I0318 13:08:28.661604 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld6x5\" (UniqueName: \"kubernetes.io/projected/3f72ae13-ec95-41c8-8d27-b83b69db104b-kube-api-access-ld6x5\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.667530 master-0 kubenswrapper[7146]: I0318 13:08:28.661628 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.667530 master-0 kubenswrapper[7146]: I0318 13:08:28.661666 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: I0318 13:08:28.762364 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: I0318 13:08:28.762493 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: I0318 13:08:28.762671 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-config\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: I0318 13:08:28.762792 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld6x5\" (UniqueName: \"kubernetes.io/projected/3f72ae13-ec95-41c8-8d27-b83b69db104b-kube-api-access-ld6x5\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: E0318 13:08:28.763090 7146 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: E0318 13:08:28.763238 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:29.263223599 +0000 UTC m=+18.071440950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : configmap "client-ca" not found
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: E0318 13:08:28.763672 7146 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:08:28.764119 master-0 kubenswrapper[7146]: E0318 13:08:28.763699 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:29.263691422 +0000 UTC m=+18.071908773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : secret "serving-cert" not found
Mar 18 13:08:28.764981 master-0 kubenswrapper[7146]: I0318 13:08:28.764873 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-config\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.815351 master-0 kubenswrapper[7146]: I0318 13:08:28.815319 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld6x5\" (UniqueName: \"kubernetes.io/projected/3f72ae13-ec95-41c8-8d27-b83b69db104b-kube-api-access-ld6x5\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:28.987701 master-0 kubenswrapper[7146]: I0318 13:08:28.987634 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"]
Mar 18 13:08:29.065899 master-0 kubenswrapper[7146]: I0318 13:08:29.065845 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"]
Mar 18 13:08:29.067767 master-0 kubenswrapper[7146]: I0318 13:08:29.067743 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"]
Mar 18 13:08:29.074315 master-0 kubenswrapper[7146]: W0318 13:08:29.074277 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73c93ee3_cf14_4fea_b2a7_ccfb56e55be4.slice/crio-350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c WatchSource:0}: Error finding container 350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c: Status 404 returned error can't find the container with id 350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c
Mar 18 13:08:29.075715 master-0 kubenswrapper[7146]: W0318 13:08:29.075571 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda6a763d_2777_40c4_ae1f_c77ced406ea2.slice/crio-8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c WatchSource:0}: Error finding container 8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c: Status 404 returned error can't find the container with id 8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c
Mar 18 13:08:29.145904 master-0 kubenswrapper[7146]: I0318 13:08:29.145850 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"]
Mar 18 13:08:29.163084 master-0 kubenswrapper[7146]: W0318 13:08:29.163026 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2b92a53_0b61_4e1d_a306_f9a498e48b38.slice/crio-55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23 WatchSource:0}: Error finding container 55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23: Status 404 returned error can't find the container with id 55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23
Mar 18 13:08:29.164889 master-0 kubenswrapper[7146]: I0318 13:08:29.164748 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"]
Mar 18 13:08:29.169634 master-0 kubenswrapper[7146]: I0318 13:08:29.169576 7146
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv"] Mar 18 13:08:29.174778 master-0 kubenswrapper[7146]: W0318 13:08:29.174746 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda01c92f5_7938_437d_8262_11598bd8023c.slice/crio-c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24 WatchSource:0}: Error finding container c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24: Status 404 returned error can't find the container with id c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24 Mar 18 13:08:29.189299 master-0 kubenswrapper[7146]: W0318 13:08:29.189261 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca94153_9d1a_4b0a_a3eb_556e85f2e875.slice/crio-d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136 WatchSource:0}: Error finding container d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136: Status 404 returned error can't find the container with id d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136 Mar 18 13:08:29.267473 master-0 kubenswrapper[7146]: I0318 13:08:29.267301 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:29.267473 master-0 kubenswrapper[7146]: I0318 13:08:29.267466 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") pod \"route-controller-manager-756d974757-dktsh\" (UID: 
\"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:29.267903 master-0 kubenswrapper[7146]: E0318 13:08:29.267594 7146 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:08:29.267903 master-0 kubenswrapper[7146]: E0318 13:08:29.267647 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:30.267630815 +0000 UTC m=+19.075848186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : secret "serving-cert" not found Mar 18 13:08:29.268539 master-0 kubenswrapper[7146]: E0318 13:08:29.268434 7146 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:08:29.268608 master-0 kubenswrapper[7146]: E0318 13:08:29.268597 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:30.268578492 +0000 UTC m=+19.076795853 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : configmap "client-ca" not found Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: I0318 13:08:29.369539 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: E0318 13:08:29.369670 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: E0318 13:08:29.369722 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:31.369707202 +0000 UTC m=+20.177924563 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : configmap "client-ca" not found Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: I0318 13:08:29.370048 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: I0318 13:08:29.370096 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: I0318 13:08:29.370129 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.370625 master-0 kubenswrapper[7146]: E0318 13:08:29.370477 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:08:29.371551 master-0 kubenswrapper[7146]: E0318 13:08:29.371288 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" 
failed. No retries permitted until 2026-03-18 13:08:31.371268395 +0000 UTC m=+20.179485756 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : secret "serving-cert" not found Mar 18 13:08:29.372979 master-0 kubenswrapper[7146]: I0318 13:08:29.372303 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.372979 master-0 kubenswrapper[7146]: I0318 13:08:29.372699 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.549042 master-0 kubenswrapper[7146]: I0318 13:08:29.548915 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" event={"ID":"cb471665-2b07-48df-9881-3fb663390b23","Type":"ContainerStarted","Data":"68c5ffa759fcc437f54d7bd3e789e8c2d2ddd9ad3679a98335c6cd2c8429c33c"} Mar 18 13:08:29.555743 master-0 kubenswrapper[7146]: I0318 13:08:29.555697 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-cwpkz_16a930da-d793-486f-bcef-cf042d3c427d/cluster-olm-operator/0.log" Mar 18 13:08:29.559775 master-0 kubenswrapper[7146]: I0318 13:08:29.559721 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerStarted","Data":"3683163827a4edece2407b15e519e57ed5810d9901b275e4063ae3e6c8a46a7c"} Mar 18 13:08:29.562257 master-0 kubenswrapper[7146]: I0318 13:08:29.562224 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv" event={"ID":"9ca94153-9d1a-4b0a-a3eb-556e85f2e875","Type":"ContainerStarted","Data":"d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136"} Mar 18 13:08:29.571324 master-0 kubenswrapper[7146]: I0318 13:08:29.571117 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23"} Mar 18 13:08:29.572438 master-0 kubenswrapper[7146]: I0318 13:08:29.572400 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerStarted","Data":"c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24"} Mar 18 13:08:29.585238 master-0 kubenswrapper[7146]: I0318 13:08:29.585159 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" event={"ID":"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4","Type":"ContainerStarted","Data":"350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c"} Mar 18 13:08:29.586778 master-0 kubenswrapper[7146]: I0318 13:08:29.586735 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" event={"ID":"da6a763d-2777-40c4-ae1f-c77ced406ea2","Type":"ContainerStarted","Data":"8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c"} Mar 18 
13:08:29.588135 master-0 kubenswrapper[7146]: I0318 13:08:29.588086 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" event={"ID":"162e25c0-761c-4414-8c29-f6931afdb7b2","Type":"ContainerStarted","Data":"b673179a522c0bd8e3a6cee919a5e39aa033f6535630e64de88d9e832bdf7a59"} Mar 18 13:08:29.589030 master-0 kubenswrapper[7146]: I0318 13:08:29.589000 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.589392 master-0 kubenswrapper[7146]: I0318 13:08:29.589368 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" event={"ID":"369e9689-e2f6-4276-b096-8db094f8d6ae","Type":"ContainerStarted","Data":"1f5a6ee5a82f28ebea2649b710d2502f72b2b11fe536e2a60ed0b6577c615a5e"} Mar 18 13:08:29.608297 master-0 kubenswrapper[7146]: I0318 13:08:29.608245 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:29.777246 master-0 kubenswrapper[7146]: I0318 13:08:29.777199 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") pod \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " Mar 18 13:08:29.777246 master-0 kubenswrapper[7146]: I0318 13:08:29.777249 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxb68\" (UniqueName: \"kubernetes.io/projected/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-kube-api-access-zxb68\") pod \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " Mar 18 13:08:29.777817 master-0 kubenswrapper[7146]: I0318 13:08:29.777786 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:29.777886 master-0 kubenswrapper[7146]: I0318 13:08:29.777851 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") pod \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " Mar 18 13:08:29.778878 master-0 kubenswrapper[7146]: I0318 13:08:29.778647 7146 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:29.779571 master-0 kubenswrapper[7146]: I0318 13:08:29.779285 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config" (OuterVolumeSpecName: "config") pod "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:29.785970 master-0 kubenswrapper[7146]: I0318 13:08:29.783261 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-kube-api-access-zxb68" (OuterVolumeSpecName: "kube-api-access-zxb68") pod "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00"). InnerVolumeSpecName "kube-api-access-zxb68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:29.881280 master-0 kubenswrapper[7146]: I0318 13:08:29.880946 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxb68\" (UniqueName: \"kubernetes.io/projected/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-kube-api-access-zxb68\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:29.881280 master-0 kubenswrapper[7146]: I0318 13:08:29.880991 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:30.287393 master-0 kubenswrapper[7146]: I0318 13:08:30.287304 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:30.287393 master-0 kubenswrapper[7146]: I0318 13:08:30.287361 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:30.287677 master-0 kubenswrapper[7146]: E0318 13:08:30.287509 7146 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:08:30.287677 master-0 kubenswrapper[7146]: E0318 13:08:30.287626 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. 
No retries permitted until 2026-03-18 13:08:32.287603759 +0000 UTC m=+21.095821220 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : secret "serving-cert" not found Mar 18 13:08:30.287677 master-0 kubenswrapper[7146]: E0318 13:08:30.287663 7146 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:08:30.287804 master-0 kubenswrapper[7146]: E0318 13:08:30.287714 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:32.287700001 +0000 UTC m=+21.095917362 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : configmap "client-ca" not found Mar 18 13:08:30.596200 master-0 kubenswrapper[7146]: I0318 13:08:30.596083 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:31.397142 master-0 kubenswrapper[7146]: I0318 13:08:31.397042 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:31.397401 master-0 kubenswrapper[7146]: E0318 13:08:31.397230 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Mar 18 13:08:31.397401 master-0 kubenswrapper[7146]: E0318 13:08:31.397311 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:35.397291903 +0000 UTC m=+24.205509254 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : object "openshift-controller-manager"/"client-ca" not registered Mar 18 13:08:31.397472 master-0 kubenswrapper[7146]: I0318 13:08:31.397408 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:31.397584 master-0 kubenswrapper[7146]: E0318 13:08:31.397546 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Mar 18 13:08:31.397632 master-0 kubenswrapper[7146]: E0318 13:08:31.397622 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:35.397603242 +0000 UTC m=+24.205820683 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : object "openshift-controller-manager"/"serving-cert" not registered Mar 18 13:08:32.359454 master-0 kubenswrapper[7146]: I0318 13:08:32.359383 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:32.359454 master-0 kubenswrapper[7146]: I0318 13:08:32.359451 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:32.360322 master-0 kubenswrapper[7146]: E0318 13:08:32.359536 7146 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:08:32.360322 master-0 kubenswrapper[7146]: E0318 13:08:32.359603 7146 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:08:32.360322 master-0 kubenswrapper[7146]: E0318 13:08:32.359613 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:36.359591564 +0000 UTC m=+25.167808925 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : secret "serving-cert" not found Mar 18 13:08:32.360322 master-0 kubenswrapper[7146]: E0318 13:08:32.359632 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:36.359623365 +0000 UTC m=+25.167840726 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : configmap "client-ca" not found Mar 18 13:08:35.408061 master-0 kubenswrapper[7146]: I0318 13:08:35.407642 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:35.408683 master-0 kubenswrapper[7146]: I0318 13:08:35.408088 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") pod \"controller-manager-f5df8899c-t4hlq\" (UID: \"eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00\") " pod="openshift-controller-manager/controller-manager-f5df8899c-t4hlq" Mar 18 13:08:35.408683 master-0 kubenswrapper[7146]: E0318 13:08:35.407841 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered 
Mar 18 13:08:35.408683 master-0 kubenswrapper[7146]: E0318 13:08:35.408200 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:43.408180398 +0000 UTC m=+32.216397759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : object "openshift-controller-manager"/"client-ca" not registered
Mar 18 13:08:35.408683 master-0 kubenswrapper[7146]: E0318 13:08:35.408210 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Mar 18 13:08:35.408683 master-0 kubenswrapper[7146]: E0318 13:08:35.408262 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert podName:eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:43.40824592 +0000 UTC m=+32.216463281 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert") pod "controller-manager-f5df8899c-t4hlq" (UID: "eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00") : object "openshift-controller-manager"/"serving-cert" not registered
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.560637 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-ff8b688b4-t48ff"]
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.561183 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.564002 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.566332 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.566639 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.567539 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:08:35.569026 master-0 kubenswrapper[7146]: I0318 13:08:35.567656 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:08:35.571813 master-0 kubenswrapper[7146]: I0318 13:08:35.571482 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 13:08:35.610275 master-0 kubenswrapper[7146]: I0318 13:08:35.610238 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-config\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.610515 master-0 kubenswrapper[7146]: I0318 13:08:35.610465 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.610563 master-0 kubenswrapper[7146]: I0318 13:08:35.610531 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcz7\" (UniqueName: \"kubernetes.io/projected/4de71e92-9da0-44f7-8d3e-13c4564a6979-kube-api-access-tqcz7\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.610627 master-0 kubenswrapper[7146]: I0318 13:08:35.610600 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-proxy-ca-bundles\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.610665 master-0 kubenswrapper[7146]: I0318 13:08:35.610640 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.711571 master-0 kubenswrapper[7146]: I0318 13:08:35.711516 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.711798 master-0 kubenswrapper[7146]: I0318 13:08:35.711589 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-config\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.711798 master-0 kubenswrapper[7146]: E0318 13:08:35.711722 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:08:35.711886 master-0 kubenswrapper[7146]: E0318 13:08:35.711820 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca podName:4de71e92-9da0-44f7-8d3e-13c4564a6979 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:36.211797165 +0000 UTC m=+25.020014526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca") pod "controller-manager-ff8b688b4-t48ff" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979") : configmap "client-ca" not found
Mar 18 13:08:35.712032 master-0 kubenswrapper[7146]: I0318 13:08:35.711952 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcz7\" (UniqueName: \"kubernetes.io/projected/4de71e92-9da0-44f7-8d3e-13c4564a6979-kube-api-access-tqcz7\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.712032 master-0 kubenswrapper[7146]: I0318 13:08:35.711986 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.712098 master-0 kubenswrapper[7146]: I0318 13:08:35.712046 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-proxy-ca-bundles\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.712441 master-0 kubenswrapper[7146]: E0318 13:08:35.712400 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:08:35.712713 master-0 kubenswrapper[7146]: E0318 13:08:35.712678 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert podName:4de71e92-9da0-44f7-8d3e-13c4564a6979 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:36.212572477 +0000 UTC m=+25.020789838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert") pod "controller-manager-ff8b688b4-t48ff" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979") : secret "serving-cert" not found
Mar 18 13:08:35.713165 master-0 kubenswrapper[7146]: I0318 13:08:35.713139 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-config\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.713891 master-0 kubenswrapper[7146]: I0318 13:08:35.713852 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-proxy-ca-bundles\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:35.821407 master-0 kubenswrapper[7146]: I0318 13:08:35.821309 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-t4hlq"]
Mar 18 13:08:35.821628 master-0 kubenswrapper[7146]: I0318 13:08:35.821616 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ff8b688b4-t48ff"]
Mar 18 13:08:36.266598 master-0 kubenswrapper[7146]: I0318 13:08:36.266482 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:36.267022 master-0 kubenswrapper[7146]: E0318 13:08:36.266654 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:08:36.267022 master-0 kubenswrapper[7146]: E0318 13:08:36.266754 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert podName:4de71e92-9da0-44f7-8d3e-13c4564a6979 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:37.266733851 +0000 UTC m=+26.074951292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert") pod "controller-manager-ff8b688b4-t48ff" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979") : secret "serving-cert" not found
Mar 18 13:08:36.267022 master-0 kubenswrapper[7146]: I0318 13:08:36.266834 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:36.267355 master-0 kubenswrapper[7146]: E0318 13:08:36.267069 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:08:36.267355 master-0 kubenswrapper[7146]: E0318 13:08:36.267167 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca podName:4de71e92-9da0-44f7-8d3e-13c4564a6979 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:37.267146243 +0000 UTC m=+26.075363684 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca") pod "controller-manager-ff8b688b4-t48ff" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979") : configmap "client-ca" not found
Mar 18 13:08:36.368498 master-0 kubenswrapper[7146]: I0318 13:08:36.368420 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:36.368725 master-0 kubenswrapper[7146]: E0318 13:08:36.368610 7146 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:08:36.368725 master-0 kubenswrapper[7146]: E0318 13:08:36.368703 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.368685687 +0000 UTC m=+33.176903048 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : secret "serving-cert" not found
Mar 18 13:08:36.368811 master-0 kubenswrapper[7146]: I0318 13:08:36.368743 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") pod \"route-controller-manager-756d974757-dktsh\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:36.369036 master-0 kubenswrapper[7146]: E0318 13:08:36.368995 7146 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:08:36.369112 master-0 kubenswrapper[7146]: E0318 13:08:36.369077 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca podName:3f72ae13-ec95-41c8-8d27-b83b69db104b nodeName:}" failed. No retries permitted until 2026-03-18 13:08:44.369054317 +0000 UTC m=+33.177271718 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca") pod "route-controller-manager-756d974757-dktsh" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b") : configmap "client-ca" not found
Mar 18 13:08:36.596240 master-0 kubenswrapper[7146]: I0318 13:08:36.596111 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-t4hlq"]
Mar 18 13:08:36.672983 master-0 kubenswrapper[7146]: I0318 13:08:36.671863 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:36.672983 master-0 kubenswrapper[7146]: I0318 13:08:36.671894 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:36.723098 master-0 kubenswrapper[7146]: I0318 13:08:36.723028 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcz7\" (UniqueName: \"kubernetes.io/projected/4de71e92-9da0-44f7-8d3e-13c4564a6979-kube-api-access-tqcz7\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:36.784752 master-0 kubenswrapper[7146]: I0318 13:08:36.783764 7146 generic.go:334] "Generic (PLEG): container finished" podID="1ad580a2-7f58-4d66-adad-0a53d9777655" containerID="efc42902b5c4767324208b71a30ab164f9e409ceb38a1c7d04d92fd8042f56d6" exitCode=0
Mar 18 13:08:36.784752 master-0 kubenswrapper[7146]: I0318 13:08:36.783807 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" event={"ID":"1ad580a2-7f58-4d66-adad-0a53d9777655","Type":"ContainerDied","Data":"efc42902b5c4767324208b71a30ab164f9e409ceb38a1c7d04d92fd8042f56d6"}
Mar 18 13:08:36.784752 master-0 kubenswrapper[7146]: I0318 13:08:36.784167 7146 scope.go:117] "RemoveContainer" containerID="efc42902b5c4767324208b71a30ab164f9e409ceb38a1c7d04d92fd8042f56d6"
Mar 18 13:08:37.279522 master-0 kubenswrapper[7146]: I0318 13:08:37.279454 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:37.279793 master-0 kubenswrapper[7146]: I0318 13:08:37.279535 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:37.279793 master-0 kubenswrapper[7146]: E0318 13:08:37.279620 7146 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:08:37.279793 master-0 kubenswrapper[7146]: E0318 13:08:37.279650 7146 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:08:37.279793 master-0 kubenswrapper[7146]: E0318 13:08:37.279676 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca podName:4de71e92-9da0-44f7-8d3e-13c4564a6979 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:39.279656992 +0000 UTC m=+28.087874353 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca") pod "controller-manager-ff8b688b4-t48ff" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979") : configmap "client-ca" not found
Mar 18 13:08:37.279793 master-0 kubenswrapper[7146]: E0318 13:08:37.279737 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert podName:4de71e92-9da0-44f7-8d3e-13c4564a6979 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:39.279724864 +0000 UTC m=+28.087942225 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert") pod "controller-manager-ff8b688b4-t48ff" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979") : secret "serving-cert" not found
Mar 18 13:08:37.362803 master-0 kubenswrapper[7146]: I0318 13:08:37.362743 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00" path="/var/lib/kubelet/pods/eed8aa0d-9c95-44b1-97f3-b45ca9c8fd00/volumes"
Mar 18 13:08:38.027849 master-0 kubenswrapper[7146]: I0318 13:08:38.025251 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"]
Mar 18 13:08:38.027849 master-0 kubenswrapper[7146]: I0318 13:08:38.026088 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.039718 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.040224 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.040549 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.041429 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.041601 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.041735 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 18 13:08:38.042656 master-0 kubenswrapper[7146]: I0318 13:08:38.041840 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 18 13:08:38.047213 master-0 kubenswrapper[7146]: I0318 13:08:38.047063 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 18 13:08:38.065612 master-0 kubenswrapper[7146]: I0318 13:08:38.065001 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"]
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098376 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-trusted-ca-bundle\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098455 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-client\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098517 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-dir\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098544 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-policies\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098565 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-serving-ca\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098579 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-serving-cert\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098599 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-encryption-config\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.101962 master-0 kubenswrapper[7146]: I0318 13:08:38.098614 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b29z\" (UniqueName: \"kubernetes.io/projected/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-kube-api-access-7b29z\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.190073 master-0 kubenswrapper[7146]: I0318 13:08:38.189103 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 13:08:38.190073 master-0 kubenswrapper[7146]: I0318 13:08:38.189596 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202318 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-client\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202398 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-dir\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202428 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-policies\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202450 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-serving-ca\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202464 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-serving-cert\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202483 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-encryption-config\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202498 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b29z\" (UniqueName: \"kubernetes.io/projected/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-kube-api-access-7b29z\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203005 master-0 kubenswrapper[7146]: I0318 13:08:38.202548 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-trusted-ca-bundle\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.203438 master-0 kubenswrapper[7146]: I0318 13:08:38.203097 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-trusted-ca-bundle\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.205750 master-0 kubenswrapper[7146]: I0318 13:08:38.204450 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-serving-ca\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.205750 master-0 kubenswrapper[7146]: I0318 13:08:38.204817 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-dir\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.205750 master-0 kubenswrapper[7146]: I0318 13:08:38.205235 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-policies\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.214967 master-0 kubenswrapper[7146]: I0318 13:08:38.214551 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-client\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.219979 master-0 kubenswrapper[7146]: I0318 13:08:38.216312 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-encryption-config\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.219979 master-0 kubenswrapper[7146]: I0318 13:08:38.216656 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-serving-cert\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.225930 master-0 kubenswrapper[7146]: W0318 13:08:38.225263 7146 reflector.go:561] object-"openshift-kube-scheduler"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler": no relationship found between node 'master-0' and this object
Mar 18 13:08:38.225930 master-0 kubenswrapper[7146]: E0318 13:08:38.225323 7146 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-scheduler\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 18 13:08:38.268118 master-0 kubenswrapper[7146]: I0318 13:08:38.267715 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 13:08:38.297981 master-0 kubenswrapper[7146]: I0318 13:08:38.297101 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b29z\" (UniqueName: \"kubernetes.io/projected/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-kube-api-access-7b29z\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.304027 master-0 kubenswrapper[7146]: I0318 13:08:38.303749 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-var-lock\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.304027 master-0 kubenswrapper[7146]: I0318 13:08:38.303876 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.304027 master-0 kubenswrapper[7146]: I0318 13:08:38.303955 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.370948 master-0 kubenswrapper[7146]: I0318 13:08:38.370394 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:08:38.413028 master-0 kubenswrapper[7146]: I0318 13:08:38.408130 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.413028 master-0 kubenswrapper[7146]: I0318 13:08:38.408239 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.413028 master-0 kubenswrapper[7146]: I0318 13:08:38.408272 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-var-lock\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.413028 master-0 kubenswrapper[7146]: I0318 13:08:38.408410 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-var-lock\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.413028 master-0 kubenswrapper[7146]: I0318 13:08:38.408673 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:38.698957 master-0 kubenswrapper[7146]: I0318 13:08:38.698319 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-ff8b688b4-t48ff"]
Mar 18 13:08:38.698957 master-0 kubenswrapper[7146]: E0318 13:08:38.698893 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" podUID="4de71e92-9da0-44f7-8d3e-13c4564a6979"
Mar 18 13:08:38.700491 master-0 kubenswrapper[7146]: I0318 13:08:38.700328 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"]
Mar 18 13:08:38.700622 master-0 kubenswrapper[7146]: E0318 13:08:38.700570 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" podUID="3f72ae13-ec95-41c8-8d27-b83b69db104b"
Mar 18 13:08:38.714730 master-0 kubenswrapper[7146]: I0318 13:08:38.714643 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"]
Mar 18 13:08:38.797048 master-0 kubenswrapper[7146]: I0318 13:08:38.796874 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerStarted","Data":"c078d45f41d868996e6ecf51daad3770f6b4c7185d981080d710f8cb1c0e4347"}
Mar 18 13:08:38.797276 master-0 kubenswrapper[7146]: I0318 13:08:38.797123 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"
Mar 18 13:08:38.802584
master-0 kubenswrapper[7146]: I0318 13:08:38.802545 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" event={"ID":"162e25c0-761c-4414-8c29-f6931afdb7b2","Type":"ContainerStarted","Data":"e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca"} Mar 18 13:08:38.808502 master-0 kubenswrapper[7146]: I0318 13:08:38.808116 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" event={"ID":"5bccf60c-5b07-4f40-8430-12bfb62661c7","Type":"ContainerStarted","Data":"a2ae2420b34ef246b54f0a6fe9ec2894bc3cd6d0edd11b8cc50a2c6c8fb9ff32"} Mar 18 13:08:38.822930 master-0 kubenswrapper[7146]: I0318 13:08:38.822843 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:38.823381 master-0 kubenswrapper[7146]: I0318 13:08:38.823353 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" event={"ID":"1ad580a2-7f58-4d66-adad-0a53d9777655","Type":"ContainerStarted","Data":"9d80034b295c4c336556d93672546628c76e7f2de665797ca7d2385c75fae222"} Mar 18 13:08:38.823533 master-0 kubenswrapper[7146]: I0318 13:08:38.823491 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" Mar 18 13:08:38.839535 master-0 kubenswrapper[7146]: I0318 13:08:38.839483 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh" Mar 18 13:08:38.853731 master-0 kubenswrapper[7146]: I0318 13:08:38.853689 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" Mar 18 13:08:38.927733 master-0 kubenswrapper[7146]: I0318 13:08:38.925146 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-config\") pod \"3f72ae13-ec95-41c8-8d27-b83b69db104b\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " Mar 18 13:08:38.927733 master-0 kubenswrapper[7146]: I0318 13:08:38.925224 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-proxy-ca-bundles\") pod \"4de71e92-9da0-44f7-8d3e-13c4564a6979\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " Mar 18 13:08:38.927733 master-0 kubenswrapper[7146]: I0318 13:08:38.925270 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcz7\" (UniqueName: \"kubernetes.io/projected/4de71e92-9da0-44f7-8d3e-13c4564a6979-kube-api-access-tqcz7\") pod \"4de71e92-9da0-44f7-8d3e-13c4564a6979\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " Mar 18 13:08:38.927733 master-0 kubenswrapper[7146]: I0318 13:08:38.925293 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-config\") pod \"4de71e92-9da0-44f7-8d3e-13c4564a6979\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " Mar 18 13:08:38.927733 master-0 kubenswrapper[7146]: I0318 13:08:38.925325 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld6x5\" (UniqueName: \"kubernetes.io/projected/3f72ae13-ec95-41c8-8d27-b83b69db104b-kube-api-access-ld6x5\") pod \"3f72ae13-ec95-41c8-8d27-b83b69db104b\" (UID: \"3f72ae13-ec95-41c8-8d27-b83b69db104b\") " Mar 18 13:08:38.931952 master-0 kubenswrapper[7146]: I0318 13:08:38.930150 
7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4de71e92-9da0-44f7-8d3e-13c4564a6979" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:38.931952 master-0 kubenswrapper[7146]: I0318 13:08:38.930289 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-config" (OuterVolumeSpecName: "config") pod "3f72ae13-ec95-41c8-8d27-b83b69db104b" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:38.942957 master-0 kubenswrapper[7146]: I0318 13:08:38.932305 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-config" (OuterVolumeSpecName: "config") pod "4de71e92-9da0-44f7-8d3e-13c4564a6979" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:38.942957 master-0 kubenswrapper[7146]: I0318 13:08:38.936150 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de71e92-9da0-44f7-8d3e-13c4564a6979-kube-api-access-tqcz7" (OuterVolumeSpecName: "kube-api-access-tqcz7") pod "4de71e92-9da0-44f7-8d3e-13c4564a6979" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979"). InnerVolumeSpecName "kube-api-access-tqcz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:38.942957 master-0 kubenswrapper[7146]: I0318 13:08:38.942299 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f72ae13-ec95-41c8-8d27-b83b69db104b-kube-api-access-ld6x5" (OuterVolumeSpecName: "kube-api-access-ld6x5") pod "3f72ae13-ec95-41c8-8d27-b83b69db104b" (UID: "3f72ae13-ec95-41c8-8d27-b83b69db104b"). InnerVolumeSpecName "kube-api-access-ld6x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:39.029898 master-0 kubenswrapper[7146]: I0318 13:08:39.027039 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.029898 master-0 kubenswrapper[7146]: I0318 13:08:39.027525 7146 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.029898 master-0 kubenswrapper[7146]: I0318 13:08:39.027612 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcz7\" (UniqueName: \"kubernetes.io/projected/4de71e92-9da0-44f7-8d3e-13c4564a6979-kube-api-access-tqcz7\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.029898 master-0 kubenswrapper[7146]: I0318 13:08:39.027622 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.029898 master-0 kubenswrapper[7146]: I0318 13:08:39.027636 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld6x5\" (UniqueName: \"kubernetes.io/projected/3f72ae13-ec95-41c8-8d27-b83b69db104b-kube-api-access-ld6x5\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.345225 master-0 
kubenswrapper[7146]: I0318 13:08:39.340726 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" Mar 18 13:08:39.345225 master-0 kubenswrapper[7146]: I0318 13:08:39.340794 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" Mar 18 13:08:39.345225 master-0 kubenswrapper[7146]: I0318 13:08:39.341784 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" Mar 18 13:08:39.360741 master-0 kubenswrapper[7146]: I0318 13:08:39.360646 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"controller-manager-ff8b688b4-t48ff\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff" Mar 18 13:08:39.455042 master-0 kubenswrapper[7146]: I0318 13:08:39.453240 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") pod \"4de71e92-9da0-44f7-8d3e-13c4564a6979\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") 
" Mar 18 13:08:39.455042 master-0 kubenswrapper[7146]: I0318 13:08:39.453337 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") pod \"4de71e92-9da0-44f7-8d3e-13c4564a6979\" (UID: \"4de71e92-9da0-44f7-8d3e-13c4564a6979\") " Mar 18 13:08:39.455042 master-0 kubenswrapper[7146]: I0318 13:08:39.454428 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca" (OuterVolumeSpecName: "client-ca") pod "4de71e92-9da0-44f7-8d3e-13c4564a6979" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:39.470225 master-0 kubenswrapper[7146]: I0318 13:08:39.468279 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4de71e92-9da0-44f7-8d3e-13c4564a6979" (UID: "4de71e92-9da0-44f7-8d3e-13c4564a6979"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:08:39.470225 master-0 kubenswrapper[7146]: E0318 13:08:39.468375 7146 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:08:39.470225 master-0 kubenswrapper[7146]: E0318 13:08:39.468397 7146 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-1-master-0: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:08:39.470225 master-0 kubenswrapper[7146]: E0318 13:08:39.468461 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access podName:fa67c544-d918-4ccf-a3a9-ffbfafe3c397 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:39.968442825 +0000 UTC m=+28.776660186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access") pod "installer-1-master-0" (UID: "fa67c544-d918-4ccf-a3a9-ffbfafe3c397") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:08:39.514569 master-0 kubenswrapper[7146]: I0318 13:08:39.514521 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6774666ccc-2b2qz"] Mar 18 13:08:39.515443 master-0 kubenswrapper[7146]: I0318 13:08:39.515386 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.548808 master-0 kubenswrapper[7146]: I0318 13:08:39.547366 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:08:39.548808 master-0 kubenswrapper[7146]: I0318 13:08:39.547632 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:08:39.548808 master-0 kubenswrapper[7146]: I0318 13:08:39.547848 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 13:08:39.553368 master-0 kubenswrapper[7146]: I0318 13:08:39.553331 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 13:08:39.555076 master-0 kubenswrapper[7146]: I0318 13:08:39.555045 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"] Mar 18 13:08:39.555657 master-0 kubenswrapper[7146]: I0318 13:08:39.555629 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.557132 master-0 kubenswrapper[7146]: I0318 13:08:39.556217 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 13:08:39.557132 master-0 kubenswrapper[7146]: I0318 13:08:39.557090 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:08:39.558186 master-0 kubenswrapper[7146]: I0318 13:08:39.558137 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:08:39.558477 master-0 kubenswrapper[7146]: I0318 13:08:39.558396 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:08:39.558477 master-0 kubenswrapper[7146]: I0318 13:08:39.558463 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 18 13:08:39.558581 master-0 kubenswrapper[7146]: I0318 13:08:39.558504 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 18 13:08:39.558581 master-0 kubenswrapper[7146]: I0318 13:08:39.558539 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de71e92-9da0-44f7-8d3e-13c4564a6979-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.558581 master-0 kubenswrapper[7146]: I0318 13:08:39.558574 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4de71e92-9da0-44f7-8d3e-13c4564a6979-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:39.563373 master-0 kubenswrapper[7146]: I0318 13:08:39.563320 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6774666ccc-2b2qz"] Mar 18 13:08:39.567773 master-0 kubenswrapper[7146]: I0318 
13:08:39.567728 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:08:39.568150 master-0 kubenswrapper[7146]: I0318 13:08:39.568060 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:08:39.568150 master-0 kubenswrapper[7146]: I0318 13:08:39.568088 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:08:39.584298 master-0 kubenswrapper[7146]: I0318 13:08:39.579682 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:08:39.584298 master-0 kubenswrapper[7146]: I0318 13:08:39.583271 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"] Mar 18 13:08:39.663618 master-0 kubenswrapper[7146]: I0318 13:08:39.663531 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663643 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-serving-cert\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663670 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-audit-dir\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663709 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5xj\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-kube-api-access-pp5xj\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663735 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-encryption-config\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663759 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-config\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663790 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/234a5a6c-3790-49d0-b1e7-86f81048d96a-cache\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.663869 
master-0 kubenswrapper[7146]: I0318 13:08:39.663808 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-image-import-ca\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.663869 master-0 kubenswrapper[7146]: I0318 13:08:39.663849 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/234a5a6c-3790-49d0-b1e7-86f81048d96a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.664125 master-0 kubenswrapper[7146]: I0318 13:08:39.663874 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.664125 master-0 kubenswrapper[7146]: I0318 13:08:39.663920 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-etcd-client\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.664125 master-0 kubenswrapper[7146]: I0318 13:08:39.663970 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.664666 master-0 kubenswrapper[7146]: I0318 13:08:39.664634 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.664666 master-0 kubenswrapper[7146]: I0318 13:08:39.664659 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-etcd-serving-ca\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.664758 master-0 kubenswrapper[7146]: I0318 13:08:39.664675 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-trusted-ca-bundle\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.664758 master-0 kubenswrapper[7146]: I0318 13:08:39.664709 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-node-pullsecrets\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.664758 master-0 
kubenswrapper[7146]: I0318 13:08:39.664734 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftrt\" (UniqueName: \"kubernetes.io/projected/c768c562-0c15-4f8e-83e0-14261a061341-kube-api-access-jftrt\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.751659 master-0 kubenswrapper[7146]: I0318 13:08:39.751528 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 13:08:39.765681 master-0 kubenswrapper[7146]: I0318 13:08:39.765629 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jftrt\" (UniqueName: \"kubernetes.io/projected/c768c562-0c15-4f8e-83e0-14261a061341-kube-api-access-jftrt\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.765681 master-0 kubenswrapper[7146]: I0318 13:08:39.765691 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.765974 master-0 kubenswrapper[7146]: I0318 13:08:39.765729 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-serving-cert\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.765974 master-0 kubenswrapper[7146]: I0318 13:08:39.765748 7146 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-audit-dir\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.765974 master-0 kubenswrapper[7146]: I0318 13:08:39.765765 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5xj\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-kube-api-access-pp5xj\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.769904 master-0 kubenswrapper[7146]: I0318 13:08:39.769872 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-serving-cert\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.770019 master-0 kubenswrapper[7146]: I0318 13:08:39.769880 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-audit-dir\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.770135 master-0 kubenswrapper[7146]: I0318 13:08:39.770080 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-encryption-config\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.770205 master-0 kubenswrapper[7146]: I0318 13:08:39.770179 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-config\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.770264 master-0 kubenswrapper[7146]: I0318 13:08:39.770240 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/234a5a6c-3790-49d0-b1e7-86f81048d96a-cache\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.770312 master-0 kubenswrapper[7146]: I0318 13:08:39.770279 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-image-import-ca\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:39.770364 master-0 kubenswrapper[7146]: I0318 13:08:39.770324 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/234a5a6c-3790-49d0-b1e7-86f81048d96a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:39.770408 master-0 kubenswrapper[7146]: I0318 13:08:39.770398 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.770570 master-0 kubenswrapper[7146]: I0318 13:08:39.770548 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-etcd-client\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.770633 master-0 kubenswrapper[7146]: I0318 13:08:39.770619 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.770711 master-0 kubenswrapper[7146]: I0318 13:08:39.770629 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.770820 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.771639 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-etcd-serving-ca\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.771668 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-trusted-ca-bundle\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.771711 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-node-pullsecrets\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.771897 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-node-pullsecrets\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.771893 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-config\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.772682 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-etcd-serving-ca\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: E0318 13:08:39.773254 7146 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: E0318 13:08:39.773309 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit podName:c768c562-0c15-4f8e-83e0-14261a061341 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:40.273289277 +0000 UTC m=+29.081506638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit") pod "apiserver-6774666ccc-2b2qz" (UID: "c768c562-0c15-4f8e-83e0-14261a061341") : configmap "audit-0" not found
Mar 18 13:08:39.773651 master-0 kubenswrapper[7146]: I0318 13:08:39.773479 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-image-import-ca\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.774195 master-0 kubenswrapper[7146]: I0318 13:08:39.773799 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/234a5a6c-3790-49d0-b1e7-86f81048d96a-cache\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.777929 master-0 kubenswrapper[7146]: I0318 13:08:39.777897 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-encryption-config\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.778117 master-0 kubenswrapper[7146]: I0318 13:08:39.778094 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.782854 master-0 kubenswrapper[7146]: I0318 13:08:39.782761 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-trusted-ca-bundle\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.790291 master-0 kubenswrapper[7146]: I0318 13:08:39.784433 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-etcd-client\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.793216 master-0 kubenswrapper[7146]: I0318 13:08:39.793103 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.800435 master-0 kubenswrapper[7146]: I0318 13:08:39.800383 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/234a5a6c-3790-49d0-b1e7-86f81048d96a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.803488 master-0 kubenswrapper[7146]: I0318 13:08:39.803455 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftrt\" (UniqueName: \"kubernetes.io/projected/c768c562-0c15-4f8e-83e0-14261a061341-kube-api-access-jftrt\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:39.808061 master-0 kubenswrapper[7146]: I0318 13:08:39.808020 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5xj\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-kube-api-access-pp5xj\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.829737 master-0 kubenswrapper[7146]: I0318 13:08:39.829679 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"
Mar 18 13:08:39.830083 master-0 kubenswrapper[7146]: I0318 13:08:39.830061 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ff8b688b4-t48ff"
Mar 18 13:08:39.841204 master-0 kubenswrapper[7146]: I0318 13:08:39.841142 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"]
Mar 18 13:08:39.842798 master-0 kubenswrapper[7146]: I0318 13:08:39.841912 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"
Mar 18 13:08:39.852345 master-0 kubenswrapper[7146]: I0318 13:08:39.852300 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"]
Mar 18 13:08:39.900428 master-0 kubenswrapper[7146]: I0318 13:08:39.900371 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:08:39.918016 master-0 kubenswrapper[7146]: I0318 13:08:39.917954 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"]
Mar 18 13:08:39.918790 master-0 kubenswrapper[7146]: I0318 13:08:39.918757 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:39.926405 master-0 kubenswrapper[7146]: I0318 13:08:39.920701 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 13:08:39.926405 master-0 kubenswrapper[7146]: I0318 13:08:39.923557 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 13:08:39.926405 master-0 kubenswrapper[7146]: I0318 13:08:39.923684 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 13:08:39.926405 master-0 kubenswrapper[7146]: I0318 13:08:39.924482 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:08:39.926405 master-0 kubenswrapper[7146]: I0318 13:08:39.925787 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 13:08:39.944686 master-0 kubenswrapper[7146]: I0318 13:08:39.944229 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"]
Mar 18 13:08:39.944686 master-0 kubenswrapper[7146]: I0318 13:08:39.944346 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"]
Mar 18 13:08:39.945966 master-0 kubenswrapper[7146]: I0318 13:08:39.945890 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756d974757-dktsh"]
Mar 18 13:08:39.979044 master-0 kubenswrapper[7146]: I0318 13:08:39.977465 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzldt\" (UniqueName: \"kubernetes.io/projected/1ad93612-ab12-4b30-984f-119e1b924a84-kube-api-access-xzldt\") pod \"csi-snapshot-controller-64854d9cff-wkw7f\" (UID: \"1ad93612-ab12-4b30-984f-119e1b924a84\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"
Mar 18 13:08:39.979044 master-0 kubenswrapper[7146]: I0318 13:08:39.977550 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:39.984199 master-0 kubenswrapper[7146]: I0318 13:08:39.984152 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:39.987964 master-0 kubenswrapper[7146]: I0318 13:08:39.987864 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-ff8b688b4-t48ff"]
Mar 18 13:08:39.990740 master-0 kubenswrapper[7146]: I0318 13:08:39.990702 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-ff8b688b4-t48ff"]
Mar 18 13:08:40.046528 master-0 kubenswrapper[7146]: I0318 13:08:40.046411 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-zlgkc"
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.078627 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-config\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.078755 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce73daa8-f853-4bcf-b70c-f352917c589e-serving-cert\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.078808 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qn58\" (UniqueName: \"kubernetes.io/projected/ce73daa8-f853-4bcf-b70c-f352917c589e-kube-api-access-2qn58\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.079020 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzldt\" (UniqueName: \"kubernetes.io/projected/1ad93612-ab12-4b30-984f-119e1b924a84-kube-api-access-xzldt\") pod \"csi-snapshot-controller-64854d9cff-wkw7f\" (UID: \"1ad93612-ab12-4b30-984f-119e1b924a84\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.079069 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-client-ca\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.079134 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72ae13-ec95-41c8-8d27-b83b69db104b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.079148 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f72ae13-ec95-41c8-8d27-b83b69db104b-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:40.088963 master-0 kubenswrapper[7146]: I0318 13:08:40.081670 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:08:40.099505 master-0 kubenswrapper[7146]: I0318 13:08:40.099105 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzldt\" (UniqueName: \"kubernetes.io/projected/1ad93612-ab12-4b30-984f-119e1b924a84-kube-api-access-xzldt\") pod \"csi-snapshot-controller-64854d9cff-wkw7f\" (UID: \"1ad93612-ab12-4b30-984f-119e1b924a84\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"
Mar 18 13:08:40.180061 master-0 kubenswrapper[7146]: I0318 13:08:40.179915 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-config\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.181297 master-0 kubenswrapper[7146]: I0318 13:08:40.180427 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce73daa8-f853-4bcf-b70c-f352917c589e-serving-cert\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.181297 master-0 kubenswrapper[7146]: I0318 13:08:40.180559 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qn58\" (UniqueName: \"kubernetes.io/projected/ce73daa8-f853-4bcf-b70c-f352917c589e-kube-api-access-2qn58\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.181297 master-0 kubenswrapper[7146]: I0318 13:08:40.181118 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-config\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.185190 master-0 kubenswrapper[7146]: I0318 13:08:40.183543 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-client-ca\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.185190 master-0 kubenswrapper[7146]: I0318 13:08:40.184484 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-client-ca\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.188095 master-0 kubenswrapper[7146]: I0318 13:08:40.187531 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce73daa8-f853-4bcf-b70c-f352917c589e-serving-cert\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.188095 master-0 kubenswrapper[7146]: I0318 13:08:40.187712 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"
Mar 18 13:08:40.199916 master-0 kubenswrapper[7146]: I0318 13:08:40.199883 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qn58\" (UniqueName: \"kubernetes.io/projected/ce73daa8-f853-4bcf-b70c-f352917c589e-kube-api-access-2qn58\") pod \"route-controller-manager-5cc4dcd8b-d7b47\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") " pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.267035 master-0 kubenswrapper[7146]: I0318 13:08:40.266036 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:40.280168 master-0 kubenswrapper[7146]: I0318 13:08:40.279523 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"]
Mar 18 13:08:40.281424 master-0 kubenswrapper[7146]: I0318 13:08:40.280610 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.282544 master-0 kubenswrapper[7146]: I0318 13:08:40.282523 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 13:08:40.283098 master-0 kubenswrapper[7146]: I0318 13:08:40.283037 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 13:08:40.283262 master-0 kubenswrapper[7146]: I0318 13:08:40.283247 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 13:08:40.286089 master-0 kubenswrapper[7146]: I0318 13:08:40.285984 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:40.286183 master-0 kubenswrapper[7146]: E0318 13:08:40.286143 7146 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 13:08:40.286225 master-0 kubenswrapper[7146]: E0318 13:08:40.286200 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit podName:c768c562-0c15-4f8e-83e0-14261a061341 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:41.28618306 +0000 UTC m=+30.094400421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit") pod "apiserver-6774666ccc-2b2qz" (UID: "c768c562-0c15-4f8e-83e0-14261a061341") : configmap "audit-0" not found
Mar 18 13:08:40.298052 master-0 kubenswrapper[7146]: I0318 13:08:40.297956 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"]
Mar 18 13:08:40.387967 master-0 kubenswrapper[7146]: I0318 13:08:40.387191 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.387967 master-0 kubenswrapper[7146]: I0318 13:08:40.387395 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/baeb6380-95e4-4e10-9798-e1e22f20bade-cache\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.387967 master-0 kubenswrapper[7146]: I0318 13:08:40.387517 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.387967 master-0 kubenswrapper[7146]: I0318 13:08:40.387556 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlm4c\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-kube-api-access-xlm4c\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.387967 master-0 kubenswrapper[7146]: I0318 13:08:40.387597 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.488760 master-0 kubenswrapper[7146]: I0318 13:08:40.488715 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/baeb6380-95e4-4e10-9798-e1e22f20bade-cache\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.488970 master-0 kubenswrapper[7146]: I0318 13:08:40.488779 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.488970 master-0 kubenswrapper[7146]: I0318 13:08:40.488800 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlm4c\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-kube-api-access-xlm4c\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.492433 master-0 kubenswrapper[7146]: I0318 13:08:40.492388 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.492524 master-0 kubenswrapper[7146]: I0318 13:08:40.492510 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.493033 master-0 kubenswrapper[7146]: I0318 13:08:40.492743 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/baeb6380-95e4-4e10-9798-e1e22f20bade-cache\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.493099 master-0 kubenswrapper[7146]: E0318 13:08:40.493068 7146 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found
Mar 18 13:08:40.493131 master-0 kubenswrapper[7146]: E0318 13:08:40.493103 7146 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z: configmap "operator-controller-trusted-ca-bundle" not found
Mar 18 13:08:40.493163 master-0 kubenswrapper[7146]: E0318 13:08:40.493151 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs podName:baeb6380-95e4-4e10-9798-e1e22f20bade nodeName:}" failed. No retries permitted until 2026-03-18 13:08:40.993137071 +0000 UTC m=+29.801354432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs") pod "operator-controller-controller-manager-57777556ff-4r95z" (UID: "baeb6380-95e4-4e10-9798-e1e22f20bade") : configmap "operator-controller-trusted-ca-bundle" not found
Mar 18 13:08:40.495004 master-0 kubenswrapper[7146]: I0318 13:08:40.493362 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.495004 master-0 kubenswrapper[7146]: I0318 13:08:40.493404 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:40.511346 master-0 kubenswrapper[7146]: I0318 13:08:40.511252 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlm4c\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-kube-api-access-xlm4c\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:41.008187 master-0 kubenswrapper[7146]: I0318 13:08:41.008137 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:41.008427 master-0 kubenswrapper[7146]: E0318 13:08:41.008286 7146 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references non-existent config key: ca-bundle.crt
Mar 18 13:08:41.008427 master-0 kubenswrapper[7146]: E0318 13:08:41.008313 7146 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z: configmap references non-existent config key: ca-bundle.crt
Mar 18 13:08:41.008515 master-0 kubenswrapper[7146]: E0318 13:08:41.008481 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs podName:baeb6380-95e4-4e10-9798-e1e22f20bade nodeName:}" failed. No retries permitted until 2026-03-18 13:08:42.008464213 +0000 UTC m=+30.816681574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs") pod "operator-controller-controller-manager-57777556ff-4r95z" (UID: "baeb6380-95e4-4e10-9798-e1e22f20bade") : configmap references non-existent config key: ca-bundle.crt
Mar 18 13:08:41.313679 master-0 kubenswrapper[7146]: I0318 13:08:41.313557 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz"
Mar 18 13:08:41.314266 master-0 kubenswrapper[7146]: E0318 13:08:41.313683 7146 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 13:08:41.314266 master-0 kubenswrapper[7146]: E0318 13:08:41.313738 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit podName:c768c562-0c15-4f8e-83e0-14261a061341 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:43.313721726 +0000 UTC m=+32.121939087 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit") pod "apiserver-6774666ccc-2b2qz" (UID: "c768c562-0c15-4f8e-83e0-14261a061341") : configmap "audit-0" not found
Mar 18 13:08:41.364256 master-0 kubenswrapper[7146]: I0318 13:08:41.363914 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f72ae13-ec95-41c8-8d27-b83b69db104b" path="/var/lib/kubelet/pods/3f72ae13-ec95-41c8-8d27-b83b69db104b/volumes"
Mar 18 13:08:41.364594 master-0 kubenswrapper[7146]: I0318 13:08:41.364566 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de71e92-9da0-44f7-8d3e-13c4564a6979" path="/var/lib/kubelet/pods/4de71e92-9da0-44f7-8d3e-13c4564a6979/volumes"
Mar 18 13:08:41.708234 master-0 kubenswrapper[7146]: W0318 13:08:41.707904 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a1019b1_2b2d_4d63_bd2b_8c45bb85c90a.slice/crio-b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f WatchSource:0}: Error finding container b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f: Status 404 returned error can't find the container with id b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f
Mar 18 13:08:41.836950 master-0 kubenswrapper[7146]: I0318 13:08:41.836881 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" event={"ID":"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a","Type":"ContainerStarted","Data":"b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f"}
Mar 18 13:08:42.028800 master-0 kubenswrapper[7146]: I0318 13:08:42.028691 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:42.033236 master-0 kubenswrapper[7146]: I0318 13:08:42.033179 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:42.102286 master-0 kubenswrapper[7146]: I0318 13:08:42.101809 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:08:42.121299 master-0 kubenswrapper[7146]: I0318 13:08:42.121260 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9"
Mar 18 13:08:42.528916 master-0 kubenswrapper[7146]: I0318 13:08:42.528859 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:42.529654 master-0 kubenswrapper[7146]: I0318 13:08:42.529099 7146 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:08:42.547465 master-0 kubenswrapper[7146]: I0318 13:08:42.547417 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:08:42.741025 master-0 kubenswrapper[7146]: I0318 13:08:42.740954 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5484d978b-wmp2h"]
Mar 18 13:08:42.741531 master-0 kubenswrapper[7146]: I0318 13:08:42.741506 7146 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.747652 master-0 kubenswrapper[7146]: I0318 13:08:42.747302 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:08:42.747652 master-0 kubenswrapper[7146]: I0318 13:08:42.747380 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:08:42.747652 master-0 kubenswrapper[7146]: I0318 13:08:42.747475 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:08:42.747652 master-0 kubenswrapper[7146]: I0318 13:08:42.747601 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:08:42.749091 master-0 kubenswrapper[7146]: I0318 13:08:42.748904 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:08:42.752927 master-0 kubenswrapper[7146]: I0318 13:08:42.752790 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:08:42.765261 master-0 kubenswrapper[7146]: I0318 13:08:42.765214 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5484d978b-wmp2h"] Mar 18 13:08:42.840542 master-0 kubenswrapper[7146]: I0318 13:08:42.840453 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-client-ca\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.840542 master-0 kubenswrapper[7146]: I0318 13:08:42.840501 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d6w6\" (UniqueName: \"kubernetes.io/projected/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-kube-api-access-6d6w6\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.840899 master-0 kubenswrapper[7146]: I0318 13:08:42.840568 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-proxy-ca-bundles\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.840899 master-0 kubenswrapper[7146]: I0318 13:08:42.840675 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-config\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.840899 master-0 kubenswrapper[7146]: I0318 13:08:42.840750 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-serving-cert\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.942487 master-0 kubenswrapper[7146]: I0318 13:08:42.941978 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-client-ca\") pod \"controller-manager-5484d978b-wmp2h\" 
(UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.942487 master-0 kubenswrapper[7146]: I0318 13:08:42.942048 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d6w6\" (UniqueName: \"kubernetes.io/projected/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-kube-api-access-6d6w6\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.942487 master-0 kubenswrapper[7146]: I0318 13:08:42.942117 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-proxy-ca-bundles\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.942853 master-0 kubenswrapper[7146]: I0318 13:08:42.942569 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-config\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.942853 master-0 kubenswrapper[7146]: I0318 13:08:42.942675 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-serving-cert\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.944464 master-0 kubenswrapper[7146]: I0318 13:08:42.943521 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-client-ca\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.944464 master-0 kubenswrapper[7146]: I0318 13:08:42.943839 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-config\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.944464 master-0 kubenswrapper[7146]: I0318 13:08:42.944415 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-proxy-ca-bundles\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.946801 master-0 kubenswrapper[7146]: I0318 13:08:42.946757 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-serving-cert\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:42.957989 master-0 kubenswrapper[7146]: I0318 13:08:42.957918 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d6w6\" (UniqueName: \"kubernetes.io/projected/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-kube-api-access-6d6w6\") pod \"controller-manager-5484d978b-wmp2h\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:43.049530 master-0 
kubenswrapper[7146]: I0318 13:08:43.049491 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6774666ccc-2b2qz"] Mar 18 13:08:43.049995 master-0 kubenswrapper[7146]: E0318 13:08:43.049973 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" podUID="c768c562-0c15-4f8e-83e0-14261a061341" Mar 18 13:08:43.106741 master-0 kubenswrapper[7146]: I0318 13:08:43.106630 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:08:43.349794 master-0 kubenswrapper[7146]: I0318 13:08:43.349636 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit\") pod \"apiserver-6774666ccc-2b2qz\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:43.350158 master-0 kubenswrapper[7146]: E0318 13:08:43.349820 7146 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 13:08:43.350260 master-0 kubenswrapper[7146]: E0318 13:08:43.350204 7146 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit podName:c768c562-0c15-4f8e-83e0-14261a061341 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:47.350185507 +0000 UTC m=+36.158402868 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit") pod "apiserver-6774666ccc-2b2qz" (UID: "c768c562-0c15-4f8e-83e0-14261a061341") : configmap "audit-0" not found Mar 18 13:08:43.846024 master-0 kubenswrapper[7146]: I0318 13:08:43.845983 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:43.857101 master-0 kubenswrapper[7146]: I0318 13:08:43.857030 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:43.956847 master-0 kubenswrapper[7146]: I0318 13:08:43.956790 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-trusted-ca-bundle\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.956847 master-0 kubenswrapper[7146]: I0318 13:08:43.956840 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-etcd-serving-ca\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.956847 master-0 kubenswrapper[7146]: I0318 13:08:43.956868 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jftrt\" (UniqueName: \"kubernetes.io/projected/c768c562-0c15-4f8e-83e0-14261a061341-kube-api-access-jftrt\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957150 master-0 kubenswrapper[7146]: I0318 13:08:43.956907 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-config\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957150 master-0 kubenswrapper[7146]: I0318 13:08:43.956925 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-serving-cert\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957209 master-0 kubenswrapper[7146]: I0318 13:08:43.957190 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-node-pullsecrets\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957273 master-0 kubenswrapper[7146]: I0318 13:08:43.957237 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-encryption-config\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957310 master-0 kubenswrapper[7146]: I0318 13:08:43.957282 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-audit-dir\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957344 master-0 kubenswrapper[7146]: I0318 13:08:43.957329 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-etcd-client\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 
13:08:43.957375 master-0 kubenswrapper[7146]: I0318 13:08:43.957364 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-image-import-ca\") pod \"c768c562-0c15-4f8e-83e0-14261a061341\" (UID: \"c768c562-0c15-4f8e-83e0-14261a061341\") " Mar 18 13:08:43.957407 master-0 kubenswrapper[7146]: I0318 13:08:43.957270 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:43.957407 master-0 kubenswrapper[7146]: I0318 13:08:43.957371 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:43.957407 master-0 kubenswrapper[7146]: I0318 13:08:43.957396 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:43.957491 master-0 kubenswrapper[7146]: I0318 13:08:43.957305 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:43.957645 master-0 kubenswrapper[7146]: I0318 13:08:43.957622 7146 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:43.957699 master-0 kubenswrapper[7146]: I0318 13:08:43.957647 7146 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c768c562-0c15-4f8e-83e0-14261a061341-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:43.957699 master-0 kubenswrapper[7146]: I0318 13:08:43.957661 7146 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:43.957699 master-0 kubenswrapper[7146]: I0318 13:08:43.957675 7146 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:43.957918 master-0 kubenswrapper[7146]: I0318 13:08:43.957897 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:43.957997 master-0 kubenswrapper[7146]: I0318 13:08:43.957982 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-config" (OuterVolumeSpecName: "config") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:43.960201 master-0 kubenswrapper[7146]: I0318 13:08:43.960162 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:08:43.960267 master-0 kubenswrapper[7146]: I0318 13:08:43.960245 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c768c562-0c15-4f8e-83e0-14261a061341-kube-api-access-jftrt" (OuterVolumeSpecName: "kube-api-access-jftrt") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "kube-api-access-jftrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:43.960868 master-0 kubenswrapper[7146]: I0318 13:08:43.960840 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:08:43.970101 master-0 kubenswrapper[7146]: I0318 13:08:43.970002 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:08:43.978211 master-0 kubenswrapper[7146]: I0318 13:08:43.978138 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c768c562-0c15-4f8e-83e0-14261a061341" (UID: "c768c562-0c15-4f8e-83e0-14261a061341"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:08:44.058432 master-0 kubenswrapper[7146]: I0318 13:08:44.058305 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jftrt\" (UniqueName: \"kubernetes.io/projected/c768c562-0c15-4f8e-83e0-14261a061341-kube-api-access-jftrt\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:44.058432 master-0 kubenswrapper[7146]: I0318 13:08:44.058340 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:44.058432 master-0 kubenswrapper[7146]: I0318 13:08:44.058353 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:44.058432 master-0 kubenswrapper[7146]: I0318 13:08:44.058366 7146 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:44.058432 master-0 kubenswrapper[7146]: I0318 13:08:44.058382 7146 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c768c562-0c15-4f8e-83e0-14261a061341-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:44.058432 master-0 kubenswrapper[7146]: I0318 13:08:44.058394 7146 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:44.260478 master-0 kubenswrapper[7146]: I0318 13:08:44.259541 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:44.260478 master-0 kubenswrapper[7146]: I0318 13:08:44.259631 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:44.260478 master-0 kubenswrapper[7146]: I0318 13:08:44.259660 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:44.260478 master-0 kubenswrapper[7146]: I0318 13:08:44.259717 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod 
\"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:44.260478 master-0 kubenswrapper[7146]: I0318 13:08:44.259760 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:44.260478 master-0 kubenswrapper[7146]: I0318 13:08:44.259800 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:44.268518 master-0 kubenswrapper[7146]: I0318 13:08:44.265845 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:44.268518 master-0 kubenswrapper[7146]: I0318 13:08:44.267561 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:44.269544 master-0 kubenswrapper[7146]: I0318 
13:08:44.269304 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:44.269544 master-0 kubenswrapper[7146]: I0318 13:08:44.269451 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:44.272873 master-0 kubenswrapper[7146]: I0318 13:08:44.270265 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:44.272873 master-0 kubenswrapper[7146]: I0318 13:08:44.271462 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:44.361428 master-0 kubenswrapper[7146]: I0318 13:08:44.361366 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod 
\"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:44.365976 master-0 kubenswrapper[7146]: I0318 13:08:44.365906 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zvsmb\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:44.457985 master-0 kubenswrapper[7146]: I0318 13:08:44.457880 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:08:44.458185 master-0 kubenswrapper[7146]: I0318 13:08:44.457921 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:08:44.458420 master-0 kubenswrapper[7146]: I0318 13:08:44.458382 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:08:44.458559 master-0 kubenswrapper[7146]: I0318 13:08:44.458528 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:08:44.459437 master-0 kubenswrapper[7146]: I0318 13:08:44.459166 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:08:44.465593 master-0 kubenswrapper[7146]: I0318 13:08:44.465554 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:08:44.465674 master-0 kubenswrapper[7146]: I0318 13:08:44.465625 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:08:44.852710 master-0 kubenswrapper[7146]: I0318 13:08:44.852666 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6774666ccc-2b2qz" Mar 18 13:08:44.999306 master-0 kubenswrapper[7146]: I0318 13:08:44.999070 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6774666ccc-2b2qz"] Mar 18 13:08:45.010529 master-0 kubenswrapper[7146]: I0318 13:08:45.010471 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-574f6d5bf6-8krhk"] Mar 18 13:08:45.011468 master-0 kubenswrapper[7146]: I0318 13:08:45.011444 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.012512 master-0 kubenswrapper[7146]: I0318 13:08:45.012466 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-6774666ccc-2b2qz"] Mar 18 13:08:45.015335 master-0 kubenswrapper[7146]: I0318 13:08:45.015294 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:08:45.016217 master-0 kubenswrapper[7146]: I0318 13:08:45.016181 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 13:08:45.017006 master-0 kubenswrapper[7146]: I0318 13:08:45.016982 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 13:08:45.024019 master-0 kubenswrapper[7146]: I0318 13:08:45.023117 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-574f6d5bf6-8krhk"] Mar 18 13:08:45.024019 
master-0 kubenswrapper[7146]: I0318 13:08:45.023243 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:08:45.024019 master-0 kubenswrapper[7146]: I0318 13:08:45.023337 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:08:45.024019 master-0 kubenswrapper[7146]: I0318 13:08:45.023377 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:08:45.024019 master-0 kubenswrapper[7146]: I0318 13:08:45.023442 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 13:08:45.024019 master-0 kubenswrapper[7146]: I0318 13:08:45.023550 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:08:45.024019 master-0 kubenswrapper[7146]: I0318 13:08:45.023597 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 13:08:45.028016 master-0 kubenswrapper[7146]: I0318 13:08:45.027700 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 13:08:45.176256 master-0 kubenswrapper[7146]: I0318 13:08:45.176208 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-client\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176360 master-0 kubenswrapper[7146]: I0318 13:08:45.176258 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca\") pod 
\"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176360 master-0 kubenswrapper[7146]: I0318 13:08:45.176282 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-serving-cert\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176360 master-0 kubenswrapper[7146]: I0318 13:08:45.176327 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176360 master-0 kubenswrapper[7146]: I0318 13:08:45.176355 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit-dir\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176470 master-0 kubenswrapper[7146]: I0318 13:08:45.176384 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbm6\" (UniqueName: \"kubernetes.io/projected/b41c9132-92ef-429d-bdd5-9bdb024e04fc-kube-api-access-wlbm6\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176470 master-0 kubenswrapper[7146]: I0318 13:08:45.176450 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-encryption-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176528 master-0 kubenswrapper[7146]: I0318 13:08:45.176506 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176561 master-0 kubenswrapper[7146]: I0318 13:08:45.176530 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-node-pullsecrets\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176561 master-0 kubenswrapper[7146]: I0318 13:08:45.176555 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176622 master-0 kubenswrapper[7146]: I0318 13:08:45.176576 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.176687 master-0 
kubenswrapper[7146]: I0318 13:08:45.176621 7146 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c768c562-0c15-4f8e-83e0-14261a061341-audit\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277212 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-client\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277471 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277492 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-serving-cert\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277653 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277725 7146 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit-dir\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277761 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbm6\" (UniqueName: \"kubernetes.io/projected/b41c9132-92ef-429d-bdd5-9bdb024e04fc-kube-api-access-wlbm6\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277790 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-encryption-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277867 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277913 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-node-pullsecrets\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.277982 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278109 master-0 kubenswrapper[7146]: I0318 13:08:45.278021 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278530 master-0 kubenswrapper[7146]: I0318 13:08:45.278321 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278561 master-0 kubenswrapper[7146]: I0318 13:08:45.278531 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278638 master-0 kubenswrapper[7146]: I0318 13:08:45.278595 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-node-pullsecrets\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278711 master-0 kubenswrapper[7146]: I0318 13:08:45.278690 7146 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.278972 master-0 kubenswrapper[7146]: I0318 13:08:45.278928 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.279029 master-0 kubenswrapper[7146]: I0318 13:08:45.278983 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit-dir\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.279301 master-0 kubenswrapper[7146]: I0318 13:08:45.279274 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.281557 master-0 kubenswrapper[7146]: I0318 13:08:45.281336 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-serving-cert\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.281557 master-0 kubenswrapper[7146]: I0318 13:08:45.281508 7146 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-client\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.286802 master-0 kubenswrapper[7146]: I0318 13:08:45.286754 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-encryption-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.318079 master-0 kubenswrapper[7146]: I0318 13:08:45.317980 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"] Mar 18 13:08:45.326800 master-0 kubenswrapper[7146]: I0318 13:08:45.326748 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbm6\" (UniqueName: \"kubernetes.io/projected/b41c9132-92ef-429d-bdd5-9bdb024e04fc-kube-api-access-wlbm6\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.363919 master-0 kubenswrapper[7146]: I0318 13:08:45.363829 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c768c562-0c15-4f8e-83e0-14261a061341" path="/var/lib/kubelet/pods/c768c562-0c15-4f8e-83e0-14261a061341/volumes" Mar 18 13:08:45.366991 master-0 kubenswrapper[7146]: I0318 13:08:45.366893 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:08:45.963047 master-0 kubenswrapper[7146]: W0318 13:08:45.962902 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce73daa8_f853_4bcf_b70c_f352917c589e.slice/crio-fc01b02f1e83431c1bb5f60d025650155b7f5d920d4dde29ad7bb0144c01f615 WatchSource:0}: Error finding container fc01b02f1e83431c1bb5f60d025650155b7f5d920d4dde29ad7bb0144c01f615: Status 404 returned error can't find the container with id fc01b02f1e83431c1bb5f60d025650155b7f5d920d4dde29ad7bb0144c01f615 Mar 18 13:08:46.395191 master-0 kubenswrapper[7146]: I0318 13:08:46.394582 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:08:46.395337 master-0 kubenswrapper[7146]: I0318 13:08:46.395315 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.405247 master-0 kubenswrapper[7146]: I0318 13:08:46.405211 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:08:46.406509 master-0 kubenswrapper[7146]: I0318 13:08:46.406470 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-var-lock\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.406608 master-0 kubenswrapper[7146]: I0318 13:08:46.406558 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91275a95-9707-4910-883a-f8e8e32bdd27-kube-api-access\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.406608 master-0 kubenswrapper[7146]: I0318 13:08:46.406592 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.510092 master-0 kubenswrapper[7146]: I0318 13:08:46.508067 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91275a95-9707-4910-883a-f8e8e32bdd27-kube-api-access\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.510092 master-0 kubenswrapper[7146]: I0318 13:08:46.508110 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.510092 master-0 kubenswrapper[7146]: I0318 13:08:46.508155 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-var-lock\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.510092 master-0 kubenswrapper[7146]: I0318 13:08:46.508245 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-var-lock\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.510092 master-0 kubenswrapper[7146]: I0318 13:08:46.508392 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.562186 master-0 kubenswrapper[7146]: I0318 13:08:46.562030 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91275a95-9707-4910-883a-f8e8e32bdd27-kube-api-access\") pod \"installer-2-master-0\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.616191 master-0 kubenswrapper[7146]: I0318 13:08:46.616145 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:46.659054 master-0 kubenswrapper[7146]: I0318 13:08:46.657878 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"] Mar 18 13:08:46.659404 master-0 kubenswrapper[7146]: I0318 13:08:46.659386 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"] Mar 18 13:08:46.694675 master-0 kubenswrapper[7146]: I0318 13:08:46.693593 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:08:46.876068 master-0 kubenswrapper[7146]: I0318 13:08:46.873779 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv" event={"ID":"9ca94153-9d1a-4b0a-a3eb-556e85f2e875","Type":"ContainerStarted","Data":"29d7a546b6979d04e9442a597393c98d98517004963d98b6772585b415f589e1"} Mar 18 13:08:46.876068 
master-0 kubenswrapper[7146]: I0318 13:08:46.873836 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv" event={"ID":"9ca94153-9d1a-4b0a-a3eb-556e85f2e875","Type":"ContainerStarted","Data":"f24e69dbc27731bf884dba79001642d0b14136397f7f52784d15312052ad1fe0"} Mar 18 13:08:46.878209 master-0 kubenswrapper[7146]: I0318 13:08:46.878178 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"] Mar 18 13:08:46.878326 master-0 kubenswrapper[7146]: I0318 13:08:46.878315 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-574f6d5bf6-8krhk"] Mar 18 13:08:46.885745 master-0 kubenswrapper[7146]: I0318 13:08:46.885596 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"] Mar 18 13:08:46.902134 master-0 kubenswrapper[7146]: I0318 13:08:46.895654 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" event={"ID":"ee1eb80b-5a76-443f-a534-54d5bdc0c98a","Type":"ContainerStarted","Data":"1c625ab74e01dd5316e14886f1962977aaeec6d850dd1b7dad1e5cfa9c9c4cad"} Mar 18 13:08:46.902134 master-0 kubenswrapper[7146]: I0318 13:08:46.901742 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"] Mar 18 13:08:46.905613 master-0 kubenswrapper[7146]: I0318 13:08:46.905557 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-4v84b"] Mar 18 13:08:46.907923 master-0 kubenswrapper[7146]: W0318 13:08:46.906319 7146 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb41c9132_92ef_429d_bdd5_9bdb024e04fc.slice/crio-fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b WatchSource:0}: Error finding container fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b: Status 404 returned error can't find the container with id fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b Mar 18 13:08:46.910442 master-0 kubenswrapper[7146]: I0318 13:08:46.910136 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"] Mar 18 13:08:46.913339 master-0 kubenswrapper[7146]: I0318 13:08:46.913283 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv" podStartSLOduration=2.109163337 podStartE2EDuration="18.913268562s" podCreationTimestamp="2026-03-18 13:08:28 +0000 UTC" firstStartedPulling="2026-03-18 13:08:29.192264716 +0000 UTC m=+18.000482077" lastFinishedPulling="2026-03-18 13:08:45.996369931 +0000 UTC m=+34.804587302" observedRunningTime="2026-03-18 13:08:46.912896491 +0000 UTC m=+35.721113862" watchObservedRunningTime="2026-03-18 13:08:46.913268562 +0000 UTC m=+35.721485923" Mar 18 13:08:46.930164 master-0 kubenswrapper[7146]: I0318 13:08:46.930076 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"] Mar 18 13:08:46.935570 master-0 kubenswrapper[7146]: I0318 13:08:46.934925 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerStarted","Data":"2126919a500ba9422805b0bb64574632eee7b7c0c00f0059624c36ed5c889fe7"} Mar 18 13:08:46.935570 master-0 kubenswrapper[7146]: I0318 13:08:46.934978 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerStarted","Data":"3d9515777e1454e99e50b07c4bb4005cbf649f4fb0161a941555e68ab2bef68b"} Mar 18 13:08:46.941351 master-0 kubenswrapper[7146]: W0318 13:08:46.941305 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47f82c03_65d1_4a6c_ba09_8a00ae778009.slice/crio-6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e WatchSource:0}: Error finding container 6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e: Status 404 returned error can't find the container with id 6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e Mar 18 13:08:46.947423 master-0 kubenswrapper[7146]: I0318 13:08:46.947365 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr"] Mar 18 13:08:46.951812 master-0 kubenswrapper[7146]: I0318 13:08:46.951784 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"bd4c65659cdaf88672c351e368deda39b10476e44f4e0b79ea5e5dab975cb22c"} Mar 18 13:08:46.954707 master-0 kubenswrapper[7146]: I0318 13:08:46.954656 7146 generic.go:334] "Generic (PLEG): container finished" podID="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" containerID="753ffebdad8f9e4671d1507f1e261536c6a9a0234c3ae2147357296698c58faf" exitCode=0 Mar 18 13:08:46.954773 master-0 kubenswrapper[7146]: I0318 13:08:46.954719 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" event={"ID":"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a","Type":"ContainerDied","Data":"753ffebdad8f9e4671d1507f1e261536c6a9a0234c3ae2147357296698c58faf"} Mar 18 13:08:46.958628 master-0 kubenswrapper[7146]: I0318 
13:08:46.958604 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq2j4"] Mar 18 13:08:46.959786 master-0 kubenswrapper[7146]: I0318 13:08:46.959667 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" event={"ID":"234a5a6c-3790-49d0-b1e7-86f81048d96a","Type":"ContainerStarted","Data":"d2e64e1e8754957863bad8639f4beaf999396133b2b69117105f95cd95cc7cf9"} Mar 18 13:08:46.963602 master-0 kubenswrapper[7146]: I0318 13:08:46.963576 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5484d978b-wmp2h"] Mar 18 13:08:46.986370 master-0 kubenswrapper[7146]: I0318 13:08:46.964186 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" event={"ID":"da6a763d-2777-40c4-ae1f-c77ced406ea2","Type":"ContainerStarted","Data":"197ca2480c03196f8b16579a12fa19b2b19c1ba277bb9a6c0f3e89221a0d5a9e"} Mar 18 13:08:46.996219 master-0 kubenswrapper[7146]: I0318 13:08:46.993436 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47" event={"ID":"ce73daa8-f853-4bcf-b70c-f352917c589e","Type":"ContainerStarted","Data":"fc01b02f1e83431c1bb5f60d025650155b7f5d920d4dde29ad7bb0144c01f615"} Mar 18 13:08:46.996219 master-0 kubenswrapper[7146]: W0318 13:08:46.993854 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906c0fd3_3bcd_4c6c_8505_b3517bae06b4.slice/crio-70c9337d8980b38a9bfe7fac6f297ccd5982f9e26f0d9055d4cd37b7726d2727 WatchSource:0}: Error finding container 70c9337d8980b38a9bfe7fac6f297ccd5982f9e26f0d9055d4cd37b7726d2727: Status 404 returned error can't find the container with id 70c9337d8980b38a9bfe7fac6f297ccd5982f9e26f0d9055d4cd37b7726d2727 Mar 18 13:08:47.038735 master-0 kubenswrapper[7146]: I0318 
13:08:47.024910 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-rlp78"] Mar 18 13:08:47.038735 master-0 kubenswrapper[7146]: I0318 13:08:47.025449 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.046345 master-0 kubenswrapper[7146]: I0318 13:08:47.046295 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" event={"ID":"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4","Type":"ContainerStarted","Data":"ca9e7669e9cbda3d1efa1643b57ac236e8b9cc289164b306448a040fc87f9948"} Mar 18 13:08:47.051930 master-0 kubenswrapper[7146]: I0318 13:08:47.051891 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"fa67c544-d918-4ccf-a3a9-ffbfafe3c397","Type":"ContainerStarted","Data":"c6af28036dfd96fda66b3c1620f4df30b654fca2fe40979dbdc8c5ac43a0865d"} Mar 18 13:08:47.058600 master-0 kubenswrapper[7146]: I0318 13:08:47.054926 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" event={"ID":"369e9689-e2f6-4276-b096-8db094f8d6ae","Type":"ContainerStarted","Data":"a4b53bab35719b1de9b4d4e1f4c3fdf356bb114dd12ac3e84e5af4fe101ae6bf"} Mar 18 13:08:47.072979 master-0 kubenswrapper[7146]: W0318 13:08:47.072928 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff67e258_b085_45a1_bfdf_7a87e2c9fc74.slice/crio-aea171dac2cdb15de31e259a633cbe7c7b58fffdce45f886918ab6cefb487098 WatchSource:0}: Error finding container aea171dac2cdb15de31e259a633cbe7c7b58fffdce45f886918ab6cefb487098: Status 404 returned error can't find the container with id aea171dac2cdb15de31e259a633cbe7c7b58fffdce45f886918ab6cefb487098 Mar 18 13:08:47.074551 master-0 
kubenswrapper[7146]: W0318 13:08:47.073157 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e691486_8540_4b79_8eed_b0fb829071db.slice/crio-bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557 WatchSource:0}: Error finding container bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557: Status 404 returned error can't find the container with id bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557 Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127006 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-sys\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127048 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-tuned\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127087 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-systemd\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127112 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-modprobe-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127126 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-lib-modules\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127143 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-run\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127158 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-host\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127192 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-kubernetes\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127213 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-conf\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127228 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-tmp\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127241 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6b9b\" (UniqueName: \"kubernetes.io/projected/0f16e797-a619-46a8-948a-9fdfc8a9891f-kube-api-access-q6b9b\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127298 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysconfig\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127311 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-var-lib-kubelet\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.127742 master-0 kubenswrapper[7146]: I0318 13:08:47.127344 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228145 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-sys\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228512 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-tuned\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228544 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-systemd\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228569 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-modprobe-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228593 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-lib-modules\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228627 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-run\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228644 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-host\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228685 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-kubernetes\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228714 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-conf\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228736 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6b9b\" (UniqueName: 
\"kubernetes.io/projected/0f16e797-a619-46a8-948a-9fdfc8a9891f-kube-api-access-q6b9b\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.228756 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-tmp\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.229052 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysconfig\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.229077 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-var-lib-kubelet\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.230397 master-0 kubenswrapper[7146]: I0318 13:08:47.229122 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.230542 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-sys\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.235392 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-conf\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.235438 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-systemd\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.235494 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysconfig\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.235633 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-modprobe-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.235704 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-lib-modules\") 
pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.237284 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-var-lib-kubelet\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.237295 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-run\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.237338 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-host\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.237357 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-kubernetes\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.238168 master-0 kubenswrapper[7146]: I0318 13:08:47.237372 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " 
pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.255748 master-0 kubenswrapper[7146]: I0318 13:08:47.255709 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5484d978b-wmp2h"] Mar 18 13:08:47.257505 master-0 kubenswrapper[7146]: I0318 13:08:47.257483 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:08:47.259754 master-0 kubenswrapper[7146]: I0318 13:08:47.259714 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-tuned\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.278203 master-0 kubenswrapper[7146]: I0318 13:08:47.276634 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-tmp\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.293639 master-0 kubenswrapper[7146]: I0318 13:08:47.292195 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6b9b\" (UniqueName: \"kubernetes.io/projected/0f16e797-a619-46a8-948a-9fdfc8a9891f-kube-api-access-q6b9b\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.319660 master-0 kubenswrapper[7146]: I0318 13:08:47.319177 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"] Mar 18 13:08:47.363330 master-0 kubenswrapper[7146]: I0318 13:08:47.361440 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:08:47.446251 master-0 kubenswrapper[7146]: W0318 13:08:47.446219 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f16e797_a619_46a8_948a_9fdfc8a9891f.slice/crio-8399843cedb6bab01ec886526327375c49b0879072738456cf483b08091d6b2d WatchSource:0}: Error finding container 8399843cedb6bab01ec886526327375c49b0879072738456cf483b08091d6b2d: Status 404 returned error can't find the container with id 8399843cedb6bab01ec886526327375c49b0879072738456cf483b08091d6b2d Mar 18 13:08:47.571880 master-0 kubenswrapper[7146]: I0318 13:08:47.571816 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wl929"] Mar 18 13:08:47.572664 master-0 kubenswrapper[7146]: I0318 13:08:47.572629 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.576620 master-0 kubenswrapper[7146]: I0318 13:08:47.576091 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:08:47.576620 master-0 kubenswrapper[7146]: I0318 13:08:47.576325 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:08:47.576620 master-0 kubenswrapper[7146]: I0318 13:08:47.576538 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:08:47.576769 master-0 kubenswrapper[7146]: I0318 13:08:47.576669 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:08:47.580266 master-0 kubenswrapper[7146]: I0318 13:08:47.580221 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wl929"] Mar 18 13:08:47.736813 master-0 kubenswrapper[7146]: I0318 13:08:47.736780 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jz6\" (UniqueName: \"kubernetes.io/projected/4671673d-afa0-481f-b3a2-2c2b9441b6ce-kube-api-access-d7jz6\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.736971 master-0 kubenswrapper[7146]: I0318 13:08:47.736826 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4671673d-afa0-481f-b3a2-2c2b9441b6ce-metrics-tls\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.736971 master-0 kubenswrapper[7146]: I0318 13:08:47.736887 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4671673d-afa0-481f-b3a2-2c2b9441b6ce-config-volume\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.838818 master-0 kubenswrapper[7146]: I0318 13:08:47.838517 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4671673d-afa0-481f-b3a2-2c2b9441b6ce-config-volume\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.838990 master-0 kubenswrapper[7146]: I0318 13:08:47.838845 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jz6\" (UniqueName: \"kubernetes.io/projected/4671673d-afa0-481f-b3a2-2c2b9441b6ce-kube-api-access-d7jz6\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.838990 master-0 kubenswrapper[7146]: I0318 13:08:47.838864 7146 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4671673d-afa0-481f-b3a2-2c2b9441b6ce-metrics-tls\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.839517 master-0 kubenswrapper[7146]: I0318 13:08:47.839498 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4671673d-afa0-481f-b3a2-2c2b9441b6ce-config-volume\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.847715 master-0 kubenswrapper[7146]: I0318 13:08:47.847669 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4671673d-afa0-481f-b3a2-2c2b9441b6ce-metrics-tls\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.857833 master-0 kubenswrapper[7146]: I0318 13:08:47.857563 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jz6\" (UniqueName: \"kubernetes.io/projected/4671673d-afa0-481f-b3a2-2c2b9441b6ce-kube-api-access-d7jz6\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:08:47.894826 master-0 kubenswrapper[7146]: I0318 13:08:47.894775 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wl929" Mar 18 13:08:48.065277 master-0 kubenswrapper[7146]: I0318 13:08:48.064946 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"91275a95-9707-4910-883a-f8e8e32bdd27","Type":"ContainerStarted","Data":"81ea2f92c5ad4885b30c35afa7735d7fdd39c6b3c6b9581f7c806f50e4fe8cf4"} Mar 18 13:08:48.065277 master-0 kubenswrapper[7146]: I0318 13:08:48.065275 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"91275a95-9707-4910-883a-f8e8e32bdd27","Type":"ContainerStarted","Data":"ca2c0895f80eba43705806f043dbbe23f06c2f321083332a16dd3cad3953e421"} Mar 18 13:08:48.089458 master-0 kubenswrapper[7146]: I0318 13:08:48.089319 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" event={"ID":"baeb6380-95e4-4e10-9798-e1e22f20bade","Type":"ContainerStarted","Data":"15bef2f9a820853bfb4950778e7197dd27cb2660a04cae29ebe1d39858cbb594"} Mar 18 13:08:48.089458 master-0 kubenswrapper[7146]: I0318 13:08:48.089368 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" event={"ID":"baeb6380-95e4-4e10-9798-e1e22f20bade","Type":"ContainerStarted","Data":"c8d0e68fce468a6cbf7a9e25b4e7afd1002b3dc75deb637dce883f568f47b361"} Mar 18 13:08:48.089458 master-0 kubenswrapper[7146]: I0318 13:08:48.089377 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" event={"ID":"baeb6380-95e4-4e10-9798-e1e22f20bade","Type":"ContainerStarted","Data":"7f19ee16fbfcf73db21dbee51bcb45264558bf405e040985a801120ef73b113c"} Mar 18 13:08:48.091221 master-0 kubenswrapper[7146]: I0318 13:08:48.089915 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:08:48.100602 master-0 kubenswrapper[7146]: I0318 13:08:48.100567 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rlp78" event={"ID":"0f16e797-a619-46a8-948a-9fdfc8a9891f","Type":"ContainerStarted","Data":"d82b63bc9ee4dc92977617d31b98bee1ef4a0d1e4fb9a90de73e57d87c6f6c12"} Mar 18 13:08:48.100602 master-0 kubenswrapper[7146]: I0318 13:08:48.100601 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-rlp78" event={"ID":"0f16e797-a619-46a8-948a-9fdfc8a9891f","Type":"ContainerStarted","Data":"8399843cedb6bab01ec886526327375c49b0879072738456cf483b08091d6b2d"} Mar 18 13:08:48.117317 master-0 kubenswrapper[7146]: I0318 13:08:48.117239 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerStarted","Data":"7dfbe5ed23f58a4b2b795d3c941f199f4ff38f6453094d9db8bcf00a90c533d5"} Mar 18 13:08:48.128255 master-0 kubenswrapper[7146]: I0318 13:08:48.127200 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" event={"ID":"47f82c03-65d1-4a6c-ba09-8a00ae778009","Type":"ContainerStarted","Data":"6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e"} Mar 18 13:08:48.131678 master-0 kubenswrapper[7146]: I0318 13:08:48.130811 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" event={"ID":"da6a763d-2777-40c4-ae1f-c77ced406ea2","Type":"ContainerStarted","Data":"c5843f44fab4faf7e89af94088f3163629dc4e663c58e22f6ca57b02a57f69f9"} Mar 18 13:08:48.133188 master-0 kubenswrapper[7146]: I0318 13:08:48.133158 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" event={"ID":"330df925-8429-4b96-9bfe-caa017c21afa","Type":"ContainerStarted","Data":"d7625d2cd327e3cafffe87f32286c7b0cc92c9be78c6e712456c0ec63d1a75aa"} Mar 18 13:08:48.135366 master-0 kubenswrapper[7146]: I0318 13:08:48.135348 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_fa67c544-d918-4ccf-a3a9-ffbfafe3c397/installer/0.log" Mar 18 13:08:48.135517 master-0 kubenswrapper[7146]: I0318 13:08:48.135496 7146 generic.go:334] "Generic (PLEG): container finished" podID="fa67c544-d918-4ccf-a3a9-ffbfafe3c397" containerID="bded498f4869da36f6141a4836243f37ed1fc3d30d4c5feaf7e6620fe927e251" exitCode=1 Mar 18 13:08:48.135673 master-0 kubenswrapper[7146]: I0318 13:08:48.135639 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"fa67c544-d918-4ccf-a3a9-ffbfafe3c397","Type":"ContainerDied","Data":"bded498f4869da36f6141a4836243f37ed1fc3d30d4c5feaf7e6620fe927e251"} Mar 18 13:08:48.137724 master-0 kubenswrapper[7146]: I0318 13:08:48.137484 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2j4" event={"ID":"5e691486-8540-4b79-8eed-b0fb829071db","Type":"ContainerStarted","Data":"bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557"} Mar 18 13:08:48.139020 master-0 kubenswrapper[7146]: I0318 13:08:48.138991 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" event={"ID":"36db10b8-33a2-4b54-85e2-9809eb6bc37d","Type":"ContainerStarted","Data":"4ffee40dc38a4798b5d24e82253a6829b54bff51963472b52cbe74d85cede668"} Mar 18 13:08:48.139106 master-0 kubenswrapper[7146]: I0318 13:08:48.139021 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" 
event={"ID":"36db10b8-33a2-4b54-85e2-9809eb6bc37d","Type":"ContainerStarted","Data":"8cfa9195fd91aaa41473c2e4d0c90829d891ed3f5c7a55b7f1376df3f2ef829a"} Mar 18 13:08:48.142730 master-0 kubenswrapper[7146]: I0318 13:08:48.142690 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" event={"ID":"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a","Type":"ContainerStarted","Data":"1b11c27d30a88dce52be62b64b735d1d6bfecce4c180c516c1ec00511c88a9cf"} Mar 18 13:08:48.145671 master-0 kubenswrapper[7146]: I0318 13:08:48.145622 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" event={"ID":"35925474-e3fe-4cff-aad6-d853816618c7","Type":"ContainerStarted","Data":"feef592bfb9171a37aa394c51fc21738e74cfa163f594aa5160554c22d6d35c6"} Mar 18 13:08:48.155156 master-0 kubenswrapper[7146]: I0318 13:08:48.148251 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" event={"ID":"906c0fd3-3bcd-4c6c-8505-b3517bae06b4","Type":"ContainerStarted","Data":"70c9337d8980b38a9bfe7fac6f297ccd5982f9e26f0d9055d4cd37b7726d2727"} Mar 18 13:08:48.155156 master-0 kubenswrapper[7146]: I0318 13:08:48.149821 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"102aa84e4ce3e9436d0ca2ee9b9c8c9afea22715024408b5fc879ac8323a8114"} Mar 18 13:08:48.157167 master-0 kubenswrapper[7146]: I0318 13:08:48.156520 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" event={"ID":"234a5a6c-3790-49d0-b1e7-86f81048d96a","Type":"ContainerStarted","Data":"e421e24f0032092d372aa8567bf62089ec16fcc76e9db4714f59ae66d20632af"} Mar 18 13:08:48.157167 master-0 kubenswrapper[7146]: I0318 13:08:48.156557 7146 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" event={"ID":"234a5a6c-3790-49d0-b1e7-86f81048d96a","Type":"ContainerStarted","Data":"ca6c08afb937ec1931bbca9f6da1d73b0f9f2d22aa67d305f5ea4119c463f3cf"} Mar 18 13:08:48.157167 master-0 kubenswrapper[7146]: I0318 13:08:48.156622 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:48.158390 master-0 kubenswrapper[7146]: I0318 13:08:48.157923 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" event={"ID":"ff67e258-b085-45a1-bfdf-7a87e2c9fc74","Type":"ContainerStarted","Data":"aea171dac2cdb15de31e259a633cbe7c7b58fffdce45f886918ab6cefb487098"} Mar 18 13:08:48.160031 master-0 kubenswrapper[7146]: I0318 13:08:48.159391 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" event={"ID":"b41c9132-92ef-429d-bdd5-9bdb024e04fc","Type":"ContainerStarted","Data":"fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b"} Mar 18 13:08:48.200897 master-0 kubenswrapper[7146]: I0318 13:08:48.196973 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.196930745 podStartE2EDuration="2.196930745s" podCreationTimestamp="2026-03-18 13:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:48.09920071 +0000 UTC m=+36.907418071" watchObservedRunningTime="2026-03-18 13:08:48.196930745 +0000 UTC m=+37.005148116" Mar 18 13:08:48.200897 master-0 kubenswrapper[7146]: I0318 13:08:48.199995 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-slqms"] Mar 18 13:08:48.200897 master-0 kubenswrapper[7146]: I0318 13:08:48.200696 7146 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.211395 master-0 kubenswrapper[7146]: I0318 13:08:48.202637 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" podStartSLOduration=8.202622284 podStartE2EDuration="8.202622284s" podCreationTimestamp="2026-03-18 13:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:48.195825164 +0000 UTC m=+37.004042535" watchObservedRunningTime="2026-03-18 13:08:48.202622284 +0000 UTC m=+37.010839645" Mar 18 13:08:48.219858 master-0 kubenswrapper[7146]: I0318 13:08:48.219829 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wl929"] Mar 18 13:08:48.462399 master-0 kubenswrapper[7146]: I0318 13:08:48.462008 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:08:48.462399 master-0 kubenswrapper[7146]: I0318 13:08:48.462057 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:08:48.464883 master-0 kubenswrapper[7146]: I0318 13:08:48.464709 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" podStartSLOduration=8.076345525 podStartE2EDuration="12.464686954s" podCreationTimestamp="2026-03-18 13:08:36 +0000 UTC" firstStartedPulling="2026-03-18 13:08:41.714176303 +0000 UTC m=+30.522393664" lastFinishedPulling="2026-03-18 13:08:46.102517732 +0000 UTC m=+34.910735093" observedRunningTime="2026-03-18 13:08:48.454991402 +0000 UTC m=+37.263208773" watchObservedRunningTime="2026-03-18 13:08:48.464686954 +0000 UTC m=+37.272904315" Mar 18 13:08:48.465831 master-0 
kubenswrapper[7146]: I0318 13:08:48.465805 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 13:08:48.467173 master-0 kubenswrapper[7146]: I0318 13:08:48.466452 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.474254 master-0 kubenswrapper[7146]: I0318 13:08:48.468107 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 13:08:48.474254 master-0 kubenswrapper[7146]: I0318 13:08:48.470761 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpz5\" (UniqueName: \"kubernetes.io/projected/8f59a12b-d690-44c5-972c-fb4b0b5819f1-kube-api-access-8kpz5\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.474254 master-0 kubenswrapper[7146]: I0318 13:08:48.471331 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8f59a12b-d690-44c5-972c-fb4b0b5819f1-hosts-file\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.489383 master-0 kubenswrapper[7146]: I0318 13:08:48.486587 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:08:48.489383 master-0 kubenswrapper[7146]: I0318 13:08:48.488770 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 13:08:48.544625 master-0 kubenswrapper[7146]: I0318 13:08:48.541401 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" podStartSLOduration=9.541386813 podStartE2EDuration="9.541386813s" 
podCreationTimestamp="2026-03-18 13:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:48.540827587 +0000 UTC m=+37.349044958" watchObservedRunningTime="2026-03-18 13:08:48.541386813 +0000 UTC m=+37.349604174" Mar 18 13:08:48.573021 master-0 kubenswrapper[7146]: I0318 13:08:48.572975 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kpz5\" (UniqueName: \"kubernetes.io/projected/8f59a12b-d690-44c5-972c-fb4b0b5819f1-kube-api-access-8kpz5\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.573181 master-0 kubenswrapper[7146]: I0318 13:08:48.573039 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.573181 master-0 kubenswrapper[7146]: I0318 13:08:48.573080 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f32b4d4d-df54-4fa7-a940-297e064fea44-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.573291 master-0 kubenswrapper[7146]: I0318 13:08:48.573225 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8f59a12b-d690-44c5-972c-fb4b0b5819f1-hosts-file\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.573291 master-0 kubenswrapper[7146]: I0318 13:08:48.573248 7146 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-var-lock\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.573481 master-0 kubenswrapper[7146]: I0318 13:08:48.573438 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8f59a12b-d690-44c5-972c-fb4b0b5819f1-hosts-file\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.613845 master-0 kubenswrapper[7146]: I0318 13:08:48.600344 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-rlp78" podStartSLOduration=2.600328824 podStartE2EDuration="2.600328824s" podCreationTimestamp="2026-03-18 13:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:48.599858881 +0000 UTC m=+37.408076242" watchObservedRunningTime="2026-03-18 13:08:48.600328824 +0000 UTC m=+37.408546185" Mar 18 13:08:48.630023 master-0 kubenswrapper[7146]: I0318 13:08:48.626260 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kpz5\" (UniqueName: \"kubernetes.io/projected/8f59a12b-d690-44c5-972c-fb4b0b5819f1-kube-api-access-8kpz5\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.668178 master-0 kubenswrapper[7146]: I0318 13:08:48.668127 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_fa67c544-d918-4ccf-a3a9-ffbfafe3c397/installer/0.log" Mar 18 13:08:48.668479 master-0 kubenswrapper[7146]: I0318 13:08:48.668222 7146 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:08:48.686840 master-0 kubenswrapper[7146]: I0318 13:08:48.681728 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.686840 master-0 kubenswrapper[7146]: I0318 13:08:48.681820 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f32b4d4d-df54-4fa7-a940-297e064fea44-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.686840 master-0 kubenswrapper[7146]: I0318 13:08:48.681863 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-var-lock\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.686840 master-0 kubenswrapper[7146]: I0318 13:08:48.681987 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-var-lock\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.686840 master-0 kubenswrapper[7146]: I0318 13:08:48.682035 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " 
pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.709865 master-0 kubenswrapper[7146]: I0318 13:08:48.709755 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f32b4d4d-df54-4fa7-a940-297e064fea44-kube-api-access\") pod \"installer-1-master-0\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.783547 master-0 kubenswrapper[7146]: I0318 13:08:48.783407 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kubelet-dir\") pod \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " Mar 18 13:08:48.783547 master-0 kubenswrapper[7146]: I0318 13:08:48.783529 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access\") pod \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " Mar 18 13:08:48.783826 master-0 kubenswrapper[7146]: I0318 13:08:48.783700 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-var-lock\") pod \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\" (UID: \"fa67c544-d918-4ccf-a3a9-ffbfafe3c397\") " Mar 18 13:08:48.784054 master-0 kubenswrapper[7146]: I0318 13:08:48.784012 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-var-lock" (OuterVolumeSpecName: "var-lock") pod "fa67c544-d918-4ccf-a3a9-ffbfafe3c397" (UID: "fa67c544-d918-4ccf-a3a9-ffbfafe3c397"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:48.784150 master-0 kubenswrapper[7146]: I0318 13:08:48.784074 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa67c544-d918-4ccf-a3a9-ffbfafe3c397" (UID: "fa67c544-d918-4ccf-a3a9-ffbfafe3c397"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:48.789118 master-0 kubenswrapper[7146]: I0318 13:08:48.789057 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa67c544-d918-4ccf-a3a9-ffbfafe3c397" (UID: "fa67c544-d918-4ccf-a3a9-ffbfafe3c397"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:48.804493 master-0 kubenswrapper[7146]: I0318 13:08:48.804405 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-slqms" Mar 18 13:08:48.829758 master-0 kubenswrapper[7146]: I0318 13:08:48.829719 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:08:48.835401 master-0 kubenswrapper[7146]: W0318 13:08:48.835336 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f59a12b_d690_44c5_972c_fb4b0b5819f1.slice/crio-85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190 WatchSource:0}: Error finding container 85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190: Status 404 returned error can't find the container with id 85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190 Mar 18 13:08:48.886562 master-0 kubenswrapper[7146]: I0318 13:08:48.886513 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:48.886562 master-0 kubenswrapper[7146]: I0318 13:08:48.886559 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:48.886562 master-0 kubenswrapper[7146]: I0318 13:08:48.886571 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa67c544-d918-4ccf-a3a9-ffbfafe3c397-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:49.066093 master-0 kubenswrapper[7146]: I0318 13:08:49.064846 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 13:08:49.178957 master-0 kubenswrapper[7146]: I0318 13:08:49.178380 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-slqms" event={"ID":"8f59a12b-d690-44c5-972c-fb4b0b5819f1","Type":"ContainerStarted","Data":"85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190"} Mar 18 13:08:49.180961 master-0 
kubenswrapper[7146]: I0318 13:08:49.179741 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"f32b4d4d-df54-4fa7-a940-297e064fea44","Type":"ContainerStarted","Data":"1b06475f72c4aa178a3711e3bf8a803b73ed7bca27bffed7ac62aefe98506c3d"} Mar 18 13:08:49.181054 master-0 kubenswrapper[7146]: I0318 13:08:49.180979 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_fa67c544-d918-4ccf-a3a9-ffbfafe3c397/installer/0.log" Mar 18 13:08:49.181054 master-0 kubenswrapper[7146]: I0318 13:08:49.181021 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"fa67c544-d918-4ccf-a3a9-ffbfafe3c397","Type":"ContainerDied","Data":"c6af28036dfd96fda66b3c1620f4df30b654fca2fe40979dbdc8c5ac43a0865d"} Mar 18 13:08:49.181054 master-0 kubenswrapper[7146]: I0318 13:08:49.181044 7146 scope.go:117] "RemoveContainer" containerID="bded498f4869da36f6141a4836243f37ed1fc3d30d4c5feaf7e6620fe927e251" Mar 18 13:08:49.181167 master-0 kubenswrapper[7146]: I0318 13:08:49.181121 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:08:49.192574 master-0 kubenswrapper[7146]: I0318 13:08:49.185904 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wl929" event={"ID":"4671673d-afa0-481f-b3a2-2c2b9441b6ce","Type":"ContainerStarted","Data":"a2f9634bc26fc4102ec0a118fdd84688c4a5ae575980f29492ab02ddd33ee35a"} Mar 18 13:08:49.193147 master-0 kubenswrapper[7146]: I0318 13:08:49.192712 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:08:49.276296 master-0 kubenswrapper[7146]: I0318 13:08:49.275848 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:08:49.286006 master-0 kubenswrapper[7146]: I0318 13:08:49.285487 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:08:49.372325 master-0 kubenswrapper[7146]: I0318 13:08:49.372285 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa67c544-d918-4ccf-a3a9-ffbfafe3c397" path="/var/lib/kubelet/pods/fa67c544-d918-4ccf-a3a9-ffbfafe3c397/volumes" Mar 18 13:08:50.192015 master-0 kubenswrapper[7146]: I0318 13:08:50.191951 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-slqms" event={"ID":"8f59a12b-d690-44c5-972c-fb4b0b5819f1","Type":"ContainerStarted","Data":"5325343d60b154dda6ae42d4e3335ea88f9839bfe8422de28fad431b1f81c6c5"} Mar 18 13:08:50.199210 master-0 kubenswrapper[7146]: I0318 13:08:50.199130 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"f32b4d4d-df54-4fa7-a940-297e064fea44","Type":"ContainerStarted","Data":"94d2bc335ae0ececbd31f7ab13a8fd2ea166534945dafb090b610544f37ca4e7"} Mar 18 13:08:50.637615 master-0 kubenswrapper[7146]: I0318 13:08:50.636091 7146 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.636071986 podStartE2EDuration="2.636071986s" podCreationTimestamp="2026-03-18 13:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:50.635449418 +0000 UTC m=+39.443666769" watchObservedRunningTime="2026-03-18 13:08:50.636071986 +0000 UTC m=+39.444289347" Mar 18 13:08:50.637615 master-0 kubenswrapper[7146]: I0318 13:08:50.637421 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-slqms" podStartSLOduration=2.637413533 podStartE2EDuration="2.637413533s" podCreationTimestamp="2026-03-18 13:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:50.6036906 +0000 UTC m=+39.411907961" watchObservedRunningTime="2026-03-18 13:08:50.637413533 +0000 UTC m=+39.445630914" Mar 18 13:08:52.122622 master-0 kubenswrapper[7146]: I0318 13:08:52.122323 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:08:52.676905 master-0 kubenswrapper[7146]: I0318 13:08:52.676237 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"] Mar 18 13:08:52.676905 master-0 kubenswrapper[7146]: I0318 13:08:52.676468 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" podUID="162e25c0-761c-4414-8c29-f6931afdb7b2" containerName="cluster-version-operator" containerID="cri-o://e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca" gracePeriod=130 Mar 18 13:08:53.106841 master-0 kubenswrapper[7146]: I0318 13:08:53.106809 7146 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:53.211422 master-0 kubenswrapper[7146]: I0318 13:08:53.211377 7146 generic.go:334] "Generic (PLEG): container finished" podID="162e25c0-761c-4414-8c29-f6931afdb7b2" containerID="e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca" exitCode=0 Mar 18 13:08:53.211817 master-0 kubenswrapper[7146]: I0318 13:08:53.211420 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" event={"ID":"162e25c0-761c-4414-8c29-f6931afdb7b2","Type":"ContainerDied","Data":"e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca"} Mar 18 13:08:53.211817 master-0 kubenswrapper[7146]: I0318 13:08:53.211444 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" Mar 18 13:08:53.211817 master-0 kubenswrapper[7146]: I0318 13:08:53.211450 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm" event={"ID":"162e25c0-761c-4414-8c29-f6931afdb7b2","Type":"ContainerDied","Data":"b673179a522c0bd8e3a6cee919a5e39aa033f6535630e64de88d9e832bdf7a59"} Mar 18 13:08:53.211817 master-0 kubenswrapper[7146]: I0318 13:08:53.211472 7146 scope.go:117] "RemoveContainer" containerID="e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca" Mar 18 13:08:53.223069 master-0 kubenswrapper[7146]: I0318 13:08:53.223024 7146 scope.go:117] "RemoveContainer" containerID="e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca" Mar 18 13:08:53.223558 master-0 kubenswrapper[7146]: E0318 13:08:53.223511 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca\": container with ID 
starting with e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca not found: ID does not exist" containerID="e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca" Mar 18 13:08:53.223711 master-0 kubenswrapper[7146]: I0318 13:08:53.223569 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca"} err="failed to get container status \"e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca\": rpc error: code = NotFound desc = could not find container \"e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca\": container with ID starting with e7673dd2ae43e04c7d4a1ececab611ded646b854263ba59081f823bf97a0caca not found: ID does not exist" Mar 18 13:08:53.269169 master-0 kubenswrapper[7146]: I0318 13:08:53.269090 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") pod \"162e25c0-761c-4414-8c29-f6931afdb7b2\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " Mar 18 13:08:53.269169 master-0 kubenswrapper[7146]: I0318 13:08:53.269185 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") pod \"162e25c0-761c-4414-8c29-f6931afdb7b2\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " Mar 18 13:08:53.269478 master-0 kubenswrapper[7146]: I0318 13:08:53.269230 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") pod \"162e25c0-761c-4414-8c29-f6931afdb7b2\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " Mar 18 13:08:53.269478 master-0 kubenswrapper[7146]: I0318 13:08:53.269235 7146 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "162e25c0-761c-4414-8c29-f6931afdb7b2" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:53.269478 master-0 kubenswrapper[7146]: I0318 13:08:53.269267 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") pod \"162e25c0-761c-4414-8c29-f6931afdb7b2\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " Mar 18 13:08:53.269478 master-0 kubenswrapper[7146]: I0318 13:08:53.269400 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "162e25c0-761c-4414-8c29-f6931afdb7b2" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2"). InnerVolumeSpecName "etc-cvo-updatepayloads". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:53.269478 master-0 kubenswrapper[7146]: I0318 13:08:53.269433 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") pod \"162e25c0-761c-4414-8c29-f6931afdb7b2\" (UID: \"162e25c0-761c-4414-8c29-f6931afdb7b2\") " Mar 18 13:08:53.269802 master-0 kubenswrapper[7146]: I0318 13:08:53.269769 7146 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:53.269802 master-0 kubenswrapper[7146]: I0318 13:08:53.269796 7146 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/162e25c0-761c-4414-8c29-f6931afdb7b2-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:53.270169 master-0 kubenswrapper[7146]: I0318 13:08:53.270129 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca" (OuterVolumeSpecName: "service-ca") pod "162e25c0-761c-4414-8c29-f6931afdb7b2" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:08:53.273883 master-0 kubenswrapper[7146]: I0318 13:08:53.273840 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "162e25c0-761c-4414-8c29-f6931afdb7b2" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:08:53.273883 master-0 kubenswrapper[7146]: I0318 13:08:53.273870 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "162e25c0-761c-4414-8c29-f6931afdb7b2" (UID: "162e25c0-761c-4414-8c29-f6931afdb7b2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:53.384052 master-0 kubenswrapper[7146]: I0318 13:08:53.383792 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/162e25c0-761c-4414-8c29-f6931afdb7b2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:53.384052 master-0 kubenswrapper[7146]: I0318 13:08:53.383872 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/162e25c0-761c-4414-8c29-f6931afdb7b2-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:53.384052 master-0 kubenswrapper[7146]: I0318 13:08:53.383887 7146 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/162e25c0-761c-4414-8c29-f6931afdb7b2-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:53.529020 master-0 kubenswrapper[7146]: I0318 13:08:53.528914 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"] Mar 18 13:08:53.531730 master-0 kubenswrapper[7146]: I0318 13:08:53.531702 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-l6hzm"] Mar 18 13:08:53.559216 master-0 kubenswrapper[7146]: I0318 13:08:53.559168 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"] Mar 18 13:08:53.559440 master-0 
kubenswrapper[7146]: E0318 13:08:53.559351 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa67c544-d918-4ccf-a3a9-ffbfafe3c397" containerName="installer"
Mar 18 13:08:53.559440 master-0 kubenswrapper[7146]: I0318 13:08:53.559362 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa67c544-d918-4ccf-a3a9-ffbfafe3c397" containerName="installer"
Mar 18 13:08:53.559440 master-0 kubenswrapper[7146]: E0318 13:08:53.559374 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162e25c0-761c-4414-8c29-f6931afdb7b2" containerName="cluster-version-operator"
Mar 18 13:08:53.559440 master-0 kubenswrapper[7146]: I0318 13:08:53.559380 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="162e25c0-761c-4414-8c29-f6931afdb7b2" containerName="cluster-version-operator"
Mar 18 13:08:53.559440 master-0 kubenswrapper[7146]: I0318 13:08:53.559444 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="162e25c0-761c-4414-8c29-f6931afdb7b2" containerName="cluster-version-operator"
Mar 18 13:08:53.559823 master-0 kubenswrapper[7146]: I0318 13:08:53.559454 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa67c544-d918-4ccf-a3a9-ffbfafe3c397" containerName="installer"
Mar 18 13:08:53.559823 master-0 kubenswrapper[7146]: I0318 13:08:53.559739 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.570026 master-0 kubenswrapper[7146]: I0318 13:08:53.567123 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 13:08:53.570026 master-0 kubenswrapper[7146]: I0318 13:08:53.567370 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 13:08:53.570026 master-0 kubenswrapper[7146]: I0318 13:08:53.567518 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 13:08:53.586319 master-0 kubenswrapper[7146]: I0318 13:08:53.585675 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.586319 master-0 kubenswrapper[7146]: I0318 13:08:53.585801 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d0b174-33e4-46ee-863b-b5cc2a271b85-kube-api-access\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.586319 master-0 kubenswrapper[7146]: I0318 13:08:53.585829 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d0b174-33e4-46ee-863b-b5cc2a271b85-serving-cert\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.586319 master-0 kubenswrapper[7146]: I0318 13:08:53.585867 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.586319 master-0 kubenswrapper[7146]: I0318 13:08:53.585891 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.686720 master-0 kubenswrapper[7146]: I0318 13:08:53.686603 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d0b174-33e4-46ee-863b-b5cc2a271b85-kube-api-access\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.686720 master-0 kubenswrapper[7146]: I0318 13:08:53.686666 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d0b174-33e4-46ee-863b-b5cc2a271b85-serving-cert\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.686720 master-0 kubenswrapper[7146]: I0318 13:08:53.686703 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.686956 master-0 kubenswrapper[7146]: I0318 13:08:53.686726 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.686956 master-0 kubenswrapper[7146]: I0318 13:08:53.686762 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.687777 master-0 kubenswrapper[7146]: I0318 13:08:53.687747 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.688254 master-0 kubenswrapper[7146]: I0318 13:08:53.688164 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.688254 master-0 kubenswrapper[7146]: I0318 13:08:53.688234 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.697278 master-0 kubenswrapper[7146]: I0318 13:08:53.697239 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d0b174-33e4-46ee-863b-b5cc2a271b85-serving-cert\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.708886 master-0 kubenswrapper[7146]: I0318 13:08:53.708845 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d0b174-33e4-46ee-863b-b5cc2a271b85-kube-api-access\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:53.887797 master-0 kubenswrapper[7146]: I0318 13:08:53.887743 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn"
Mar 18 13:08:54.223481 master-0 kubenswrapper[7146]: I0318 13:08:54.223412 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47" event={"ID":"ce73daa8-f853-4bcf-b70c-f352917c589e","Type":"ContainerStarted","Data":"9b1f8b2519f2a9a6195b1d9575ece9a51165b5ceb46db81c6d1094a81729658e"}
Mar 18 13:08:54.223481 master-0 kubenswrapper[7146]: I0318 13:08:54.223441 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47" podUID="ce73daa8-f853-4bcf-b70c-f352917c589e" containerName="route-controller-manager" containerID="cri-o://9b1f8b2519f2a9a6195b1d9575ece9a51165b5ceb46db81c6d1094a81729658e" gracePeriod=30
Mar 18 13:08:54.225224 master-0 kubenswrapper[7146]: I0318 13:08:54.223825 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:54.232096 master-0 kubenswrapper[7146]: I0318 13:08:54.232058 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:54.244156 master-0 kubenswrapper[7146]: I0318 13:08:54.242806 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47" podStartSLOduration=9.124908514 podStartE2EDuration="16.242791131s" podCreationTimestamp="2026-03-18 13:08:38 +0000 UTC" firstStartedPulling="2026-03-18 13:08:45.993314016 +0000 UTC m=+34.801531377" lastFinishedPulling="2026-03-18 13:08:53.111196633 +0000 UTC m=+41.919413994" observedRunningTime="2026-03-18 13:08:54.238915383 +0000 UTC m=+43.047132744" watchObservedRunningTime="2026-03-18 13:08:54.242791131 +0000 UTC m=+43.051008482"
Mar 18 13:08:54.427854 master-0 kubenswrapper[7146]: I0318 13:08:54.427718 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 18 13:08:54.462504 master-0 kubenswrapper[7146]: I0318 13:08:54.429451 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.466851 master-0 kubenswrapper[7146]: I0318 13:08:54.466061 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 18 13:08:54.466851 master-0 kubenswrapper[7146]: I0318 13:08:54.466332 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 13:08:54.496125 master-0 kubenswrapper[7146]: I0318 13:08:54.496082 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88cd8323-8857-41fe-85d4-e6064330ec71-kube-api-access\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.496246 master-0 kubenswrapper[7146]: I0318 13:08:54.496140 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-var-lock\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.496316 master-0 kubenswrapper[7146]: I0318 13:08:54.496265 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.577787 master-0 kubenswrapper[7146]: I0318 13:08:54.577722 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 13:08:54.578058 master-0 kubenswrapper[7146]: I0318 13:08:54.577962 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="91275a95-9707-4910-883a-f8e8e32bdd27" containerName="installer" containerID="cri-o://81ea2f92c5ad4885b30c35afa7735d7fdd39c6b3c6b9581f7c806f50e4fe8cf4" gracePeriod=30
Mar 18 13:08:54.602129 master-0 kubenswrapper[7146]: I0318 13:08:54.596610 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.602129 master-0 kubenswrapper[7146]: I0318 13:08:54.596683 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88cd8323-8857-41fe-85d4-e6064330ec71-kube-api-access\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.602129 master-0 kubenswrapper[7146]: I0318 13:08:54.596704 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-var-lock\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.602129 master-0 kubenswrapper[7146]: I0318 13:08:54.596760 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-var-lock\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.602129 master-0 kubenswrapper[7146]: I0318 13:08:54.596794 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.620881 master-0 kubenswrapper[7146]: I0318 13:08:54.620845 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88cd8323-8857-41fe-85d4-e6064330ec71-kube-api-access\") pod \"installer-1-master-0\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:54.797252 master-0 kubenswrapper[7146]: I0318 13:08:54.797126 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:55.233233 master-0 kubenswrapper[7146]: I0318 13:08:55.233148 7146 generic.go:334] "Generic (PLEG): container finished" podID="ce73daa8-f853-4bcf-b70c-f352917c589e" containerID="9b1f8b2519f2a9a6195b1d9575ece9a51165b5ceb46db81c6d1094a81729658e" exitCode=0
Mar 18 13:08:55.233233 master-0 kubenswrapper[7146]: I0318 13:08:55.233200 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47" event={"ID":"ce73daa8-f853-4bcf-b70c-f352917c589e","Type":"ContainerDied","Data":"9b1f8b2519f2a9a6195b1d9575ece9a51165b5ceb46db81c6d1094a81729658e"}
Mar 18 13:08:55.365248 master-0 kubenswrapper[7146]: I0318 13:08:55.365127 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="162e25c0-761c-4414-8c29-f6931afdb7b2" path="/var/lib/kubelet/pods/162e25c0-761c-4414-8c29-f6931afdb7b2/volumes"
Mar 18 13:08:55.726639 master-0 kubenswrapper[7146]: W0318 13:08:55.726521 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4d0b174_33e4_46ee_863b_b5cc2a271b85.slice/crio-dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443 WatchSource:0}: Error finding container dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443: Status 404 returned error can't find the container with id dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443
Mar 18 13:08:55.757882 master-0 kubenswrapper[7146]: I0318 13:08:55.757857 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:55.809048 master-0 kubenswrapper[7146]: I0318 13:08:55.808573 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-config\") pod \"ce73daa8-f853-4bcf-b70c-f352917c589e\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") "
Mar 18 13:08:55.809048 master-0 kubenswrapper[7146]: I0318 13:08:55.808639 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-client-ca\") pod \"ce73daa8-f853-4bcf-b70c-f352917c589e\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") "
Mar 18 13:08:55.809048 master-0 kubenswrapper[7146]: I0318 13:08:55.808678 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce73daa8-f853-4bcf-b70c-f352917c589e-serving-cert\") pod \"ce73daa8-f853-4bcf-b70c-f352917c589e\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") "
Mar 18 13:08:55.809048 master-0 kubenswrapper[7146]: I0318 13:08:55.808708 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qn58\" (UniqueName: \"kubernetes.io/projected/ce73daa8-f853-4bcf-b70c-f352917c589e-kube-api-access-2qn58\") pod \"ce73daa8-f853-4bcf-b70c-f352917c589e\" (UID: \"ce73daa8-f853-4bcf-b70c-f352917c589e\") "
Mar 18 13:08:55.810804 master-0 kubenswrapper[7146]: I0318 13:08:55.810762 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce73daa8-f853-4bcf-b70c-f352917c589e" (UID: "ce73daa8-f853-4bcf-b70c-f352917c589e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:08:55.811561 master-0 kubenswrapper[7146]: I0318 13:08:55.811458 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-config" (OuterVolumeSpecName: "config") pod "ce73daa8-f853-4bcf-b70c-f352917c589e" (UID: "ce73daa8-f853-4bcf-b70c-f352917c589e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:08:55.815495 master-0 kubenswrapper[7146]: I0318 13:08:55.814158 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce73daa8-f853-4bcf-b70c-f352917c589e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce73daa8-f853-4bcf-b70c-f352917c589e" (UID: "ce73daa8-f853-4bcf-b70c-f352917c589e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:08:55.821710 master-0 kubenswrapper[7146]: I0318 13:08:55.821641 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce73daa8-f853-4bcf-b70c-f352917c589e-kube-api-access-2qn58" (OuterVolumeSpecName: "kube-api-access-2qn58") pod "ce73daa8-f853-4bcf-b70c-f352917c589e" (UID: "ce73daa8-f853-4bcf-b70c-f352917c589e"). InnerVolumeSpecName "kube-api-access-2qn58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:08:55.909664 master-0 kubenswrapper[7146]: I0318 13:08:55.909616 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:55.909664 master-0 kubenswrapper[7146]: I0318 13:08:55.909655 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce73daa8-f853-4bcf-b70c-f352917c589e-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:55.909664 master-0 kubenswrapper[7146]: I0318 13:08:55.909665 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce73daa8-f853-4bcf-b70c-f352917c589e-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:55.909664 master-0 kubenswrapper[7146]: I0318 13:08:55.909674 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qn58\" (UniqueName: \"kubernetes.io/projected/ce73daa8-f853-4bcf-b70c-f352917c589e-kube-api-access-2qn58\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:56.239830 master-0 kubenswrapper[7146]: I0318 13:08:56.239764 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47" event={"ID":"ce73daa8-f853-4bcf-b70c-f352917c589e","Type":"ContainerDied","Data":"fc01b02f1e83431c1bb5f60d025650155b7f5d920d4dde29ad7bb0144c01f615"}
Mar 18 13:08:56.239830 master-0 kubenswrapper[7146]: I0318 13:08:56.239792 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"
Mar 18 13:08:56.239830 master-0 kubenswrapper[7146]: I0318 13:08:56.239837 7146 scope.go:117] "RemoveContainer" containerID="9b1f8b2519f2a9a6195b1d9575ece9a51165b5ceb46db81c6d1094a81729658e"
Mar 18 13:08:56.242779 master-0 kubenswrapper[7146]: I0318 13:08:56.242752 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_91275a95-9707-4910-883a-f8e8e32bdd27/installer/0.log"
Mar 18 13:08:56.242843 master-0 kubenswrapper[7146]: I0318 13:08:56.242811 7146 generic.go:334] "Generic (PLEG): container finished" podID="91275a95-9707-4910-883a-f8e8e32bdd27" containerID="81ea2f92c5ad4885b30c35afa7735d7fdd39c6b3c6b9581f7c806f50e4fe8cf4" exitCode=1
Mar 18 13:08:56.242908 master-0 kubenswrapper[7146]: I0318 13:08:56.242883 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"91275a95-9707-4910-883a-f8e8e32bdd27","Type":"ContainerDied","Data":"81ea2f92c5ad4885b30c35afa7735d7fdd39c6b3c6b9581f7c806f50e4fe8cf4"}
Mar 18 13:08:56.243807 master-0 kubenswrapper[7146]: I0318 13:08:56.243785 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" event={"ID":"e4d0b174-33e4-46ee-863b-b5cc2a271b85","Type":"ContainerStarted","Data":"dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443"}
Mar 18 13:08:56.272175 master-0 kubenswrapper[7146]: I0318 13:08:56.272131 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"]
Mar 18 13:08:56.273802 master-0 kubenswrapper[7146]: I0318 13:08:56.273777 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc4dcd8b-d7b47"]
Mar 18 13:08:56.973436 master-0 kubenswrapper[7146]: I0318 13:08:56.973363 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 13:08:56.973675 master-0 kubenswrapper[7146]: E0318 13:08:56.973541 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce73daa8-f853-4bcf-b70c-f352917c589e" containerName="route-controller-manager"
Mar 18 13:08:56.973675 master-0 kubenswrapper[7146]: I0318 13:08:56.973553 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce73daa8-f853-4bcf-b70c-f352917c589e" containerName="route-controller-manager"
Mar 18 13:08:56.973675 master-0 kubenswrapper[7146]: I0318 13:08:56.973652 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce73daa8-f853-4bcf-b70c-f352917c589e" containerName="route-controller-manager"
Mar 18 13:08:56.974010 master-0 kubenswrapper[7146]: I0318 13:08:56.973979 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:56.985695 master-0 kubenswrapper[7146]: I0318 13:08:56.985645 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 13:08:57.023084 master-0 kubenswrapper[7146]: I0318 13:08:57.023040 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fab6cf-3b8f-47a6-837a-319049f487e3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.023293 master-0 kubenswrapper[7146]: I0318 13:08:57.023130 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.023293 master-0 kubenswrapper[7146]: I0318 13:08:57.023157 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-var-lock\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.123969 master-0 kubenswrapper[7146]: I0318 13:08:57.123707 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.123969 master-0 kubenswrapper[7146]: I0318 13:08:57.123760 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-var-lock\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.123969 master-0 kubenswrapper[7146]: I0318 13:08:57.123812 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-var-lock\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.123969 master-0 kubenswrapper[7146]: I0318 13:08:57.123872 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.123969 master-0 kubenswrapper[7146]: I0318 13:08:57.123963 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fab6cf-3b8f-47a6-837a-319049f487e3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.138525 master-0 kubenswrapper[7146]: I0318 13:08:57.138439 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fab6cf-3b8f-47a6-837a-319049f487e3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:57.365064 master-0 kubenswrapper[7146]: I0318 13:08:57.364703 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce73daa8-f853-4bcf-b70c-f352917c589e" path="/var/lib/kubelet/pods/ce73daa8-f853-4bcf-b70c-f352917c589e/volumes"
Mar 18 13:08:57.372364 master-0 kubenswrapper[7146]: I0318 13:08:57.372330 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:08:58.405315 master-0 kubenswrapper[7146]: I0318 13:08:58.405270 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"]
Mar 18 13:08:58.405811 master-0 kubenswrapper[7146]: I0318 13:08:58.405784 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.415437 master-0 kubenswrapper[7146]: I0318 13:08:58.410571 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 13:08:58.415437 master-0 kubenswrapper[7146]: I0318 13:08:58.410821 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 13:08:58.415437 master-0 kubenswrapper[7146]: I0318 13:08:58.410823 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:08:58.415437 master-0 kubenswrapper[7146]: I0318 13:08:58.410998 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 13:08:58.415437 master-0 kubenswrapper[7146]: I0318 13:08:58.412158 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 13:08:58.431100 master-0 kubenswrapper[7146]: I0318 13:08:58.430955 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"]
Mar 18 13:08:58.442276 master-0 kubenswrapper[7146]: I0318 13:08:58.442234 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_91275a95-9707-4910-883a-f8e8e32bdd27/installer/0.log"
Mar 18 13:08:58.442457 master-0 kubenswrapper[7146]: I0318 13:08:58.442305 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 13:08:58.540121 master-0 kubenswrapper[7146]: I0318 13:08:58.540048 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-client-ca\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.540354 master-0 kubenswrapper[7146]: I0318 13:08:58.540261 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-config\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.540545 master-0 kubenswrapper[7146]: I0318 13:08:58.540483 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nw6\" (UniqueName: \"kubernetes.io/projected/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-kube-api-access-m6nw6\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.541003 master-0 kubenswrapper[7146]: I0318 13:08:58.540929 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-serving-cert\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642046 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-kubelet-dir\") pod \"91275a95-9707-4910-883a-f8e8e32bdd27\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") "
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642138 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91275a95-9707-4910-883a-f8e8e32bdd27-kube-api-access\") pod \"91275a95-9707-4910-883a-f8e8e32bdd27\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") "
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642203 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-var-lock\") pod \"91275a95-9707-4910-883a-f8e8e32bdd27\" (UID: \"91275a95-9707-4910-883a-f8e8e32bdd27\") "
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642321 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "91275a95-9707-4910-883a-f8e8e32bdd27" (UID: "91275a95-9707-4910-883a-f8e8e32bdd27"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642422 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-serving-cert\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642484 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-client-ca\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642529 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-config\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642557 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6nw6\" (UniqueName: \"kubernetes.io/projected/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-kube-api-access-m6nw6\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642606 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:58.643100 master-0 kubenswrapper[7146]: I0318 13:08:58.642885 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-var-lock" (OuterVolumeSpecName: "var-lock") pod "91275a95-9707-4910-883a-f8e8e32bdd27" (UID: "91275a95-9707-4910-883a-f8e8e32bdd27"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:58.643797 master-0 kubenswrapper[7146]: I0318 13:08:58.643759 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-client-ca\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.644333 master-0 kubenswrapper[7146]: I0318 13:08:58.644205 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-config\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.646834 master-0 kubenswrapper[7146]: I0318 13:08:58.646490 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-serving-cert\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.647083 master-0 kubenswrapper[7146]: I0318 13:08:58.647045 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91275a95-9707-4910-883a-f8e8e32bdd27-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "91275a95-9707-4910-883a-f8e8e32bdd27" (UID: "91275a95-9707-4910-883a-f8e8e32bdd27"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:08:58.658334 master-0 kubenswrapper[7146]: I0318 13:08:58.658289 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6nw6\" (UniqueName: \"kubernetes.io/projected/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-kube-api-access-m6nw6\") pod \"route-controller-manager-68f97cf79f-trbrq\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"
Mar 18 13:08:58.743226 master-0 kubenswrapper[7146]: I0318 13:08:58.743179 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91275a95-9707-4910-883a-f8e8e32bdd27-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:58.743226 master-0 kubenswrapper[7146]: I0318 13:08:58.743223 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/91275a95-9707-4910-883a-f8e8e32bdd27-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:58.764284 master-0 kubenswrapper[7146]: I0318 13:08:58.764242 7146 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" Mar 18 13:08:58.802758 master-0 kubenswrapper[7146]: I0318 13:08:58.802695 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:08:58.802981 master-0 kubenswrapper[7146]: E0318 13:08:58.802914 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91275a95-9707-4910-883a-f8e8e32bdd27" containerName="installer" Mar 18 13:08:58.802981 master-0 kubenswrapper[7146]: I0318 13:08:58.802929 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="91275a95-9707-4910-883a-f8e8e32bdd27" containerName="installer" Mar 18 13:08:58.803098 master-0 kubenswrapper[7146]: I0318 13:08:58.803071 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="91275a95-9707-4910-883a-f8e8e32bdd27" containerName="installer" Mar 18 13:08:58.803499 master-0 kubenswrapper[7146]: I0318 13:08:58.803471 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:58.806049 master-0 kubenswrapper[7146]: I0318 13:08:58.805316 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:08:58.810655 master-0 kubenswrapper[7146]: I0318 13:08:58.810616 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:08:58.945066 master-0 kubenswrapper[7146]: I0318 13:08:58.944971 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82d33ac9-1471-47c5-802c-c267e7c1694f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:58.945066 master-0 kubenswrapper[7146]: I0318 13:08:58.945035 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-var-lock\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:58.945268 master-0 kubenswrapper[7146]: I0318 13:08:58.945116 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.046059 master-0 kubenswrapper[7146]: I0318 13:08:59.045980 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/82d33ac9-1471-47c5-802c-c267e7c1694f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.046059 master-0 kubenswrapper[7146]: I0318 13:08:59.046053 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-var-lock\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.046274 master-0 kubenswrapper[7146]: I0318 13:08:59.046087 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.046274 master-0 kubenswrapper[7146]: I0318 13:08:59.046166 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-var-lock\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.046274 master-0 kubenswrapper[7146]: I0318 13:08:59.046223 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.062397 master-0 kubenswrapper[7146]: I0318 13:08:59.062321 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/82d33ac9-1471-47c5-802c-c267e7c1694f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.129258 master-0 kubenswrapper[7146]: I0318 13:08:59.129200 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:59.259094 master-0 kubenswrapper[7146]: I0318 13:08:59.259056 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_91275a95-9707-4910-883a-f8e8e32bdd27/installer/0.log" Mar 18 13:08:59.259294 master-0 kubenswrapper[7146]: I0318 13:08:59.259124 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"91275a95-9707-4910-883a-f8e8e32bdd27","Type":"ContainerDied","Data":"ca2c0895f80eba43705806f043dbbe23f06c2f321083332a16dd3cad3953e421"} Mar 18 13:08:59.259294 master-0 kubenswrapper[7146]: I0318 13:08:59.259185 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:59.298305 master-0 kubenswrapper[7146]: I0318 13:08:59.298256 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:08:59.308718 master-0 kubenswrapper[7146]: I0318 13:08:59.308660 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:08:59.366987 master-0 kubenswrapper[7146]: I0318 13:08:59.366926 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91275a95-9707-4910-883a-f8e8e32bdd27" path="/var/lib/kubelet/pods/91275a95-9707-4910-883a-f8e8e32bdd27/volumes" Mar 18 13:08:59.903784 master-0 kubenswrapper[7146]: I0318 13:08:59.903732 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:08:59.984994 master-0 kubenswrapper[7146]: I0318 13:08:59.984963 7146 scope.go:117] "RemoveContainer" containerID="81ea2f92c5ad4885b30c35afa7735d7fdd39c6b3c6b9581f7c806f50e4fe8cf4" Mar 18 13:09:03.088803 master-0 kubenswrapper[7146]: I0318 13:09:03.088756 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 13:09:03.112548 master-0 kubenswrapper[7146]: W0318 13:09:03.112511 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd6fab6cf_3b8f_47a6_837a_319049f487e3.slice/crio-a51d0013ac9432bcbe6b6dfe803b9f4d84a0c049f224c76e5a674061ebc1d68e WatchSource:0}: Error finding container a51d0013ac9432bcbe6b6dfe803b9f4d84a0c049f224c76e5a674061ebc1d68e: Status 404 returned error can't find the container with id a51d0013ac9432bcbe6b6dfe803b9f4d84a0c049f224c76e5a674061ebc1d68e Mar 18 13:09:03.159543 master-0 kubenswrapper[7146]: I0318 13:09:03.158265 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] 
Mar 18 13:09:03.260974 master-0 kubenswrapper[7146]: I0318 13:09:03.258518 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"] Mar 18 13:09:03.273973 master-0 kubenswrapper[7146]: I0318 13:09:03.268180 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:09:03.309099 master-0 kubenswrapper[7146]: I0318 13:09:03.308692 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2j4" event={"ID":"5e691486-8540-4b79-8eed-b0fb829071db","Type":"ContainerStarted","Data":"10a2852408049b8aa381ba7d825cf65ce7e297b885a80d21f15c67dc32de3d43"} Mar 18 13:09:03.311844 master-0 kubenswrapper[7146]: W0318 13:09:03.311792 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e359bd_b9ff_42c3_9c2a_037ae05f41d5.slice/crio-b58767da3cbc4988be1f3bf69998566919f279318ad1a06350e9eba709e90e27 WatchSource:0}: Error finding container b58767da3cbc4988be1f3bf69998566919f279318ad1a06350e9eba709e90e27: Status 404 returned error can't find the container with id b58767da3cbc4988be1f3bf69998566919f279318ad1a06350e9eba709e90e27 Mar 18 13:09:03.336055 master-0 kubenswrapper[7146]: W0318 13:09:03.335337 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod82d33ac9_1471_47c5_802c_c267e7c1694f.slice/crio-481c7dcce61a798c3ae7174386db78f936ce6c972f60de6b89507279a1155768 WatchSource:0}: Error finding container 481c7dcce61a798c3ae7174386db78f936ce6c972f60de6b89507279a1155768: Status 404 returned error can't find the container with id 481c7dcce61a798c3ae7174386db78f936ce6c972f60de6b89507279a1155768 Mar 18 13:09:03.347966 master-0 kubenswrapper[7146]: I0318 13:09:03.341845 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" 
event={"ID":"88cd8323-8857-41fe-85d4-e6064330ec71","Type":"ContainerStarted","Data":"0d931af2c5d54a586a9cb21f694a9dbf73198cb23716b2134948c1a2dbbd5bc6"} Mar 18 13:09:03.370924 master-0 kubenswrapper[7146]: I0318 13:09:03.370880 7146 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-4v84b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Mar 18 13:09:03.371025 master-0 kubenswrapper[7146]: I0318 13:09:03.370951 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" podUID="330df925-8429-4b96-9bfe-caa017c21afa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Mar 18 13:09:03.401206 master-0 kubenswrapper[7146]: I0318 13:09:03.401167 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" event={"ID":"906c0fd3-3bcd-4c6c-8505-b3517bae06b4","Type":"ContainerStarted","Data":"edd40514a1f5f31c013c470064966c977a9ede25c673b02694bc6dccf5bde6b4"} Mar 18 13:09:03.401310 master-0 kubenswrapper[7146]: I0318 13:09:03.401218 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:09:03.401310 master-0 kubenswrapper[7146]: I0318 13:09:03.401235 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" event={"ID":"330df925-8429-4b96-9bfe-caa017c21afa","Type":"ContainerStarted","Data":"620704d7c61dd7667c0b9ebbc637d5a4615acb926bb8c0bad681bcafb14bec19"} Mar 18 13:09:03.401310 master-0 kubenswrapper[7146]: I0318 13:09:03.401247 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-wl929" event={"ID":"4671673d-afa0-481f-b3a2-2c2b9441b6ce","Type":"ContainerStarted","Data":"2377df4f05a24e0ad87b6e61ee4e2d7bff01893d557b107e875c64664801496c"} Mar 18 13:09:03.402530 master-0 kubenswrapper[7146]: I0318 13:09:03.402472 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerStarted","Data":"ddc5fc9bc5738b2fb623ab1efb2af56221fe48e8d53ef7d28db78ae72c1b278b"} Mar 18 13:09:03.407781 master-0 kubenswrapper[7146]: I0318 13:09:03.406893 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" event={"ID":"36db10b8-33a2-4b54-85e2-9809eb6bc37d","Type":"ContainerStarted","Data":"763c041e89e36c29391b2cb35cd74d0ff6b0e6c63f07f02d238f792452bdf127"} Mar 18 13:09:03.408729 master-0 kubenswrapper[7146]: I0318 13:09:03.408697 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:09:03.463032 master-0 kubenswrapper[7146]: I0318 13:09:03.455649 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" event={"ID":"47f82c03-65d1-4a6c-ba09-8a00ae778009","Type":"ContainerStarted","Data":"d3b18b06b1385118154ca40286eb72ce3b2ca7d19fb136078bc09404f47f6b63"} Mar 18 13:09:03.466558 master-0 kubenswrapper[7146]: I0318 13:09:03.462930 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:09:03.472614 master-0 kubenswrapper[7146]: I0318 13:09:03.472546 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" podStartSLOduration=8.890968242 
podStartE2EDuration="24.472525148s" podCreationTimestamp="2026-03-18 13:08:39 +0000 UTC" firstStartedPulling="2026-03-18 13:08:47.026920302 +0000 UTC m=+35.835137663" lastFinishedPulling="2026-03-18 13:09:02.608477208 +0000 UTC m=+51.416694569" observedRunningTime="2026-03-18 13:09:03.469902624 +0000 UTC m=+52.278120005" watchObservedRunningTime="2026-03-18 13:09:03.472525148 +0000 UTC m=+52.280742519" Mar 18 13:09:03.474899 master-0 kubenswrapper[7146]: I0318 13:09:03.474869 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:09:03.490075 master-0 kubenswrapper[7146]: I0318 13:09:03.490025 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" event={"ID":"ff67e258-b085-45a1-bfdf-7a87e2c9fc74","Type":"ContainerStarted","Data":"5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae"} Mar 18 13:09:03.490232 master-0 kubenswrapper[7146]: I0318 13:09:03.490200 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" podUID="ff67e258-b085-45a1-bfdf-7a87e2c9fc74" containerName="controller-manager" containerID="cri-o://5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae" gracePeriod=30 Mar 18 13:09:03.492507 master-0 kubenswrapper[7146]: I0318 13:09:03.492384 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:09:03.514655 master-0 kubenswrapper[7146]: I0318 13:09:03.513900 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:09:03.555481 master-0 kubenswrapper[7146]: I0318 13:09:03.553620 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" event={"ID":"e4d0b174-33e4-46ee-863b-b5cc2a271b85","Type":"ContainerStarted","Data":"1b8157f4c23747a17d99cd1a75b5fd67d7d1923b9d3c78ebf701ed19d3b1c48e"} Mar 18 13:09:03.579751 master-0 kubenswrapper[7146]: I0318 13:09:03.572472 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" event={"ID":"ee1eb80b-5a76-443f-a534-54d5bdc0c98a","Type":"ContainerStarted","Data":"46f59dd21fb87db373200bebf13fee08df344992778fe047b2ba2390470ad04e"} Mar 18 13:09:03.579751 master-0 kubenswrapper[7146]: I0318 13:09:03.576811 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" podStartSLOduration=10.033979995 podStartE2EDuration="25.576770567s" podCreationTimestamp="2026-03-18 13:08:38 +0000 UTC" firstStartedPulling="2026-03-18 13:08:47.087203309 +0000 UTC m=+35.895420660" lastFinishedPulling="2026-03-18 13:09:02.629993871 +0000 UTC m=+51.438211232" observedRunningTime="2026-03-18 13:09:03.570232794 +0000 UTC m=+52.378450175" watchObservedRunningTime="2026-03-18 13:09:03.576770567 +0000 UTC m=+52.384987928" Mar 18 13:09:03.624921 master-0 kubenswrapper[7146]: I0318 13:09:03.618602 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" event={"ID":"35925474-e3fe-4cff-aad6-d853816618c7","Type":"ContainerStarted","Data":"f24cd354ec3211876dcc175f4af874dc46e7b7641542b45ac179dbf55e9e97a3"} Mar 18 13:09:03.624921 master-0 kubenswrapper[7146]: I0318 13:09:03.619562 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:09:03.632328 master-0 kubenswrapper[7146]: I0318 13:09:03.630686 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" 
event={"ID":"d6fab6cf-3b8f-47a6-837a-319049f487e3","Type":"ContainerStarted","Data":"a51d0013ac9432bcbe6b6dfe803b9f4d84a0c049f224c76e5a674061ebc1d68e"} Mar 18 13:09:03.678461 master-0 kubenswrapper[7146]: I0318 13:09:03.678419 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:09:03.747590 master-0 kubenswrapper[7146]: I0318 13:09:03.747540 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" podStartSLOduration=10.747509249 podStartE2EDuration="10.747509249s" podCreationTimestamp="2026-03-18 13:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:03.746987874 +0000 UTC m=+52.555205245" watchObservedRunningTime="2026-03-18 13:09:03.747509249 +0000 UTC m=+52.555726600" Mar 18 13:09:04.029966 master-0 kubenswrapper[7146]: I0318 13:09:04.029421 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:09:04.137113 master-0 kubenswrapper[7146]: I0318 13:09:04.135831 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-config\") pod \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " Mar 18 13:09:04.137113 master-0 kubenswrapper[7146]: I0318 13:09:04.135906 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-serving-cert\") pod \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " Mar 18 13:09:04.137113 master-0 kubenswrapper[7146]: I0318 13:09:04.135992 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-client-ca\") pod \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " Mar 18 13:09:04.137113 master-0 kubenswrapper[7146]: I0318 13:09:04.136023 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d6w6\" (UniqueName: \"kubernetes.io/projected/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-kube-api-access-6d6w6\") pod \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " Mar 18 13:09:04.137113 master-0 kubenswrapper[7146]: I0318 13:09:04.136058 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-proxy-ca-bundles\") pod \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\" (UID: \"ff67e258-b085-45a1-bfdf-7a87e2c9fc74\") " Mar 18 13:09:04.137681 master-0 kubenswrapper[7146]: I0318 13:09:04.137133 7146 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff67e258-b085-45a1-bfdf-7a87e2c9fc74" (UID: "ff67e258-b085-45a1-bfdf-7a87e2c9fc74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:04.137681 master-0 kubenswrapper[7146]: I0318 13:09:04.137216 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-config" (OuterVolumeSpecName: "config") pod "ff67e258-b085-45a1-bfdf-7a87e2c9fc74" (UID: "ff67e258-b085-45a1-bfdf-7a87e2c9fc74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:04.137681 master-0 kubenswrapper[7146]: I0318 13:09:04.137409 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ff67e258-b085-45a1-bfdf-7a87e2c9fc74" (UID: "ff67e258-b085-45a1-bfdf-7a87e2c9fc74"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:04.142958 master-0 kubenswrapper[7146]: I0318 13:09:04.139751 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-kube-api-access-6d6w6" (OuterVolumeSpecName: "kube-api-access-6d6w6") pod "ff67e258-b085-45a1-bfdf-7a87e2c9fc74" (UID: "ff67e258-b085-45a1-bfdf-7a87e2c9fc74"). InnerVolumeSpecName "kube-api-access-6d6w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:09:04.154481 master-0 kubenswrapper[7146]: I0318 13:09:04.153277 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff67e258-b085-45a1-bfdf-7a87e2c9fc74" (UID: "ff67e258-b085-45a1-bfdf-7a87e2c9fc74"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:09:04.237981 master-0 kubenswrapper[7146]: I0318 13:09:04.236688 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:04.237981 master-0 kubenswrapper[7146]: I0318 13:09:04.236727 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:04.237981 master-0 kubenswrapper[7146]: I0318 13:09:04.236738 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:04.237981 master-0 kubenswrapper[7146]: I0318 13:09:04.236749 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d6w6\" (UniqueName: \"kubernetes.io/projected/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-kube-api-access-6d6w6\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:04.237981 master-0 kubenswrapper[7146]: I0318 13:09:04.236759 7146 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff67e258-b085-45a1-bfdf-7a87e2c9fc74-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:04.467039 master-0 kubenswrapper[7146]: I0318 13:09:04.466884 7146 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:09:04.550153 master-0 kubenswrapper[7146]: I0318 13:09:04.549319 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7499"] Mar 18 13:09:04.550153 master-0 kubenswrapper[7146]: E0318 13:09:04.549558 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff67e258-b085-45a1-bfdf-7a87e2c9fc74" containerName="controller-manager" Mar 18 13:09:04.550153 master-0 kubenswrapper[7146]: I0318 13:09:04.549574 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff67e258-b085-45a1-bfdf-7a87e2c9fc74" containerName="controller-manager" Mar 18 13:09:04.550153 master-0 kubenswrapper[7146]: I0318 13:09:04.549679 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff67e258-b085-45a1-bfdf-7a87e2c9fc74" containerName="controller-manager" Mar 18 13:09:04.550470 master-0 kubenswrapper[7146]: I0318 13:09:04.550394 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.559491 master-0 kubenswrapper[7146]: I0318 13:09:04.557272 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7499"] Mar 18 13:09:04.642961 master-0 kubenswrapper[7146]: I0318 13:09:04.642890 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" event={"ID":"906c0fd3-3bcd-4c6c-8505-b3517bae06b4","Type":"ContainerStarted","Data":"cca16efb9d54bc951cd9ba818f02d1594b6f1d22556ab9b15b457bd617b1b96c"} Mar 18 13:09:04.645580 master-0 kubenswrapper[7146]: I0318 13:09:04.645538 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-utilities\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.645580 master-0 kubenswrapper[7146]: I0318 13:09:04.645578 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-catalog-content\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.645681 master-0 kubenswrapper[7146]: I0318 13:09:04.645636 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69fh\" (UniqueName: \"kubernetes.io/projected/b282ab6f-702c-44cc-942e-f2320b61d42e-kube-api-access-r69fh\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.649267 master-0 kubenswrapper[7146]: I0318 13:09:04.649235 7146 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wl929" event={"ID":"4671673d-afa0-481f-b3a2-2c2b9441b6ce","Type":"ContainerStarted","Data":"ac7b1a202c57c2d6b547982b9216e6b62c984b489cb9a208b4fa7bf75be49e49"} Mar 18 13:09:04.649502 master-0 kubenswrapper[7146]: I0318 13:09:04.649483 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wl929" Mar 18 13:09:04.652857 master-0 kubenswrapper[7146]: I0318 13:09:04.652798 7146 generic.go:334] "Generic (PLEG): container finished" podID="ff67e258-b085-45a1-bfdf-7a87e2c9fc74" containerID="5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae" exitCode=0 Mar 18 13:09:04.653200 master-0 kubenswrapper[7146]: I0318 13:09:04.653008 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" event={"ID":"ff67e258-b085-45a1-bfdf-7a87e2c9fc74","Type":"ContainerDied","Data":"5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae"} Mar 18 13:09:04.653259 master-0 kubenswrapper[7146]: I0318 13:09:04.653219 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" event={"ID":"ff67e258-b085-45a1-bfdf-7a87e2c9fc74","Type":"ContainerDied","Data":"aea171dac2cdb15de31e259a633cbe7c7b58fffdce45f886918ab6cefb487098"} Mar 18 13:09:04.653259 master-0 kubenswrapper[7146]: I0318 13:09:04.653249 7146 scope.go:117] "RemoveContainer" containerID="5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae" Mar 18 13:09:04.653607 master-0 kubenswrapper[7146]: I0318 13:09:04.653113 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5484d978b-wmp2h" Mar 18 13:09:04.656873 master-0 kubenswrapper[7146]: I0318 13:09:04.656843 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88cd8323-8857-41fe-85d4-e6064330ec71","Type":"ContainerStarted","Data":"2930eafa2605e45a0822de041f245bf9aca0638ca211202bfcc70902ad20170b"} Mar 18 13:09:04.662126 master-0 kubenswrapper[7146]: I0318 13:09:04.662059 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d6fab6cf-3b8f-47a6-837a-319049f487e3","Type":"ContainerStarted","Data":"cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d"} Mar 18 13:09:04.665560 master-0 kubenswrapper[7146]: I0318 13:09:04.665493 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" event={"ID":"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5","Type":"ContainerStarted","Data":"861371cd07cf9c0a4ae28a200cdeb0dec6fad29b4b4b5448a50e24d192d7c15c"} Mar 18 13:09:04.665560 master-0 kubenswrapper[7146]: I0318 13:09:04.665551 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" event={"ID":"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5","Type":"ContainerStarted","Data":"b58767da3cbc4988be1f3bf69998566919f279318ad1a06350e9eba709e90e27"} Mar 18 13:09:04.666326 master-0 kubenswrapper[7146]: I0318 13:09:04.666226 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" Mar 18 13:09:04.667790 master-0 kubenswrapper[7146]: I0318 13:09:04.667757 7146 scope.go:117] "RemoveContainer" containerID="5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae" Mar 18 13:09:04.669215 master-0 kubenswrapper[7146]: E0318 13:09:04.669182 7146 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae\": container with ID starting with 5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae not found: ID does not exist" containerID="5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae" Mar 18 13:09:04.669293 master-0 kubenswrapper[7146]: I0318 13:09:04.669212 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae"} err="failed to get container status \"5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae\": rpc error: code = NotFound desc = could not find container \"5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae\": container with ID starting with 5a8f058f1c0c44840117d3229177d71aaccc6e7598d56d0d1be72fcfe0236eae not found: ID does not exist" Mar 18 13:09:04.670065 master-0 kubenswrapper[7146]: I0318 13:09:04.669966 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"82d33ac9-1471-47c5-802c-c267e7c1694f","Type":"ContainerStarted","Data":"d9d3a75725d56154d845d3eafe31cef00c186357af6963fb23afd016af24585b"} Mar 18 13:09:04.670065 master-0 kubenswrapper[7146]: I0318 13:09:04.669997 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"82d33ac9-1471-47c5-802c-c267e7c1694f","Type":"ContainerStarted","Data":"481c7dcce61a798c3ae7174386db78f936ce6c972f60de6b89507279a1155768"} Mar 18 13:09:04.687607 master-0 kubenswrapper[7146]: I0318 13:09:04.681267 7146 generic.go:334] "Generic (PLEG): container finished" podID="b41c9132-92ef-429d-bdd5-9bdb024e04fc" containerID="4cfcb6d43544aaea92892e1f33a27bf4899640538c587e1c1eacf22ba718bb42" exitCode=0 Mar 18 13:09:04.688138 master-0 
kubenswrapper[7146]: I0318 13:09:04.687743 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" event={"ID":"b41c9132-92ef-429d-bdd5-9bdb024e04fc","Type":"ContainerDied","Data":"4cfcb6d43544aaea92892e1f33a27bf4899640538c587e1c1eacf22ba718bb42"} Mar 18 13:09:04.688138 master-0 kubenswrapper[7146]: I0318 13:09:04.687818 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" Mar 18 13:09:04.689510 master-0 kubenswrapper[7146]: I0318 13:09:04.689173 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wl929" podStartSLOduration=3.548832635 podStartE2EDuration="17.689160652s" podCreationTimestamp="2026-03-18 13:08:47 +0000 UTC" firstStartedPulling="2026-03-18 13:08:48.469043306 +0000 UTC m=+37.277260667" lastFinishedPulling="2026-03-18 13:09:02.609371323 +0000 UTC m=+51.417588684" observedRunningTime="2026-03-18 13:09:04.688155014 +0000 UTC m=+53.496372385" watchObservedRunningTime="2026-03-18 13:09:04.689160652 +0000 UTC m=+53.497378013" Mar 18 13:09:04.694708 master-0 kubenswrapper[7146]: I0318 13:09:04.694663 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2j4" event={"ID":"5e691486-8540-4b79-8eed-b0fb829071db","Type":"ContainerStarted","Data":"56714921b67c1124ac410d007a98292fa9f66875dd7d1f06919b18c5dd3f1b55"} Mar 18 13:09:04.718404 master-0 kubenswrapper[7146]: I0318 13:09:04.718163 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=10.718139274 podStartE2EDuration="10.718139274s" podCreationTimestamp="2026-03-18 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:04.705711786 +0000 UTC m=+53.513929157" 
watchObservedRunningTime="2026-03-18 13:09:04.718139274 +0000 UTC m=+53.526356635" Mar 18 13:09:04.746508 master-0 kubenswrapper[7146]: I0318 13:09:04.746458 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r69fh\" (UniqueName: \"kubernetes.io/projected/b282ab6f-702c-44cc-942e-f2320b61d42e-kube-api-access-r69fh\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.746913 master-0 kubenswrapper[7146]: I0318 13:09:04.746557 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-utilities\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.746913 master-0 kubenswrapper[7146]: I0318 13:09:04.746574 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-catalog-content\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.752449 master-0 kubenswrapper[7146]: I0318 13:09:04.752381 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-utilities\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.758041 master-0 kubenswrapper[7146]: I0318 13:09:04.757617 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-catalog-content\") pod \"community-operators-p7499\" 
(UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.772036 master-0 kubenswrapper[7146]: I0318 13:09:04.771926 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r69fh\" (UniqueName: \"kubernetes.io/projected/b282ab6f-702c-44cc-942e-f2320b61d42e-kube-api-access-r69fh\") pod \"community-operators-p7499\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.784401 master-0 kubenswrapper[7146]: I0318 13:09:04.782469 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 13:09:04.795527 master-0 kubenswrapper[7146]: I0318 13:09:04.795456 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7gwnt"] Mar 18 13:09:04.797110 master-0 kubenswrapper[7146]: I0318 13:09:04.796411 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.848966 master-0 kubenswrapper[7146]: I0318 13:09:04.848049 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" podStartSLOduration=17.848024691 podStartE2EDuration="17.848024691s" podCreationTimestamp="2026-03-18 13:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:04.812390033 +0000 UTC m=+53.620607394" watchObservedRunningTime="2026-03-18 13:09:04.848024691 +0000 UTC m=+53.656242072" Mar 18 13:09:04.852967 master-0 kubenswrapper[7146]: I0318 13:09:04.851423 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-catalog-content\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.852967 master-0 kubenswrapper[7146]: I0318 13:09:04.851491 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2p8\" (UniqueName: \"kubernetes.io/projected/1afcb319-16c7-4f27-9db8-21b105a1bdba-kube-api-access-8z2p8\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.852967 master-0 kubenswrapper[7146]: I0318 13:09:04.851509 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-utilities\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.873963 
master-0 kubenswrapper[7146]: I0318 13:09:04.868557 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:04.875986 master-0 kubenswrapper[7146]: I0318 13:09:04.875572 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gwnt"] Mar 18 13:09:04.919964 master-0 kubenswrapper[7146]: I0318 13:09:04.903159 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=6.903131025 podStartE2EDuration="6.903131025s" podCreationTimestamp="2026-03-18 13:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:04.897199399 +0000 UTC m=+53.705416760" watchObservedRunningTime="2026-03-18 13:09:04.903131025 +0000 UTC m=+53.711348406" Mar 18 13:09:04.961963 master-0 kubenswrapper[7146]: I0318 13:09:04.952078 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-catalog-content\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.961963 master-0 kubenswrapper[7146]: I0318 13:09:04.952135 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2p8\" (UniqueName: \"kubernetes.io/projected/1afcb319-16c7-4f27-9db8-21b105a1bdba-kube-api-access-8z2p8\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.961963 master-0 kubenswrapper[7146]: I0318 13:09:04.952159 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-utilities\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.961963 master-0 kubenswrapper[7146]: I0318 13:09:04.953264 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-catalog-content\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.961963 master-0 kubenswrapper[7146]: I0318 13:09:04.954893 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-utilities\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:04.980204 master-0 kubenswrapper[7146]: I0318 13:09:04.977782 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=8.977761795 podStartE2EDuration="8.977761795s" podCreationTimestamp="2026-03-18 13:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:04.973052123 +0000 UTC m=+53.781269494" watchObservedRunningTime="2026-03-18 13:09:04.977761795 +0000 UTC m=+53.785979156" Mar 18 13:09:05.002379 master-0 kubenswrapper[7146]: I0318 13:09:04.999277 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2p8\" (UniqueName: \"kubernetes.io/projected/1afcb319-16c7-4f27-9db8-21b105a1bdba-kube-api-access-8z2p8\") pod \"redhat-marketplace-7gwnt\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 
18 13:09:05.026155 master-0 kubenswrapper[7146]: I0318 13:09:05.019395 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5484d978b-wmp2h"] Mar 18 13:09:05.026155 master-0 kubenswrapper[7146]: I0318 13:09:05.021073 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5484d978b-wmp2h"] Mar 18 13:09:05.266599 master-0 kubenswrapper[7146]: I0318 13:09:05.266482 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:05.347961 master-0 kubenswrapper[7146]: I0318 13:09:05.347904 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7499"] Mar 18 13:09:05.372234 master-0 kubenswrapper[7146]: I0318 13:09:05.372042 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff67e258-b085-45a1-bfdf-7a87e2c9fc74" path="/var/lib/kubelet/pods/ff67e258-b085-45a1-bfdf-7a87e2c9fc74/volumes" Mar 18 13:09:05.710824 master-0 kubenswrapper[7146]: I0318 13:09:05.710746 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" event={"ID":"b41c9132-92ef-429d-bdd5-9bdb024e04fc","Type":"ContainerStarted","Data":"0d91c42c4181995063362022f0e79ce9e2cfdde6f2734cf785effebf13b2eb05"} Mar 18 13:09:05.710824 master-0 kubenswrapper[7146]: I0318 13:09:05.710789 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" event={"ID":"b41c9132-92ef-429d-bdd5-9bdb024e04fc","Type":"ContainerStarted","Data":"d9f60501fb04ba6aca9c4ad2529d4ad6704543c84c147f34635b5f06ee424977"} Mar 18 13:09:05.716336 master-0 kubenswrapper[7146]: I0318 13:09:05.714018 7146 generic.go:334] "Generic (PLEG): container finished" podID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerID="1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205" exitCode=0 Mar 18 
13:09:05.716336 master-0 kubenswrapper[7146]: I0318 13:09:05.715193 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7499" event={"ID":"b282ab6f-702c-44cc-942e-f2320b61d42e","Type":"ContainerDied","Data":"1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205"} Mar 18 13:09:05.716336 master-0 kubenswrapper[7146]: I0318 13:09:05.715245 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7499" event={"ID":"b282ab6f-702c-44cc-942e-f2320b61d42e","Type":"ContainerStarted","Data":"1b8e47c9b17efae6a6cc0dbeb65d00fae0910922cb941a8ca1e5a3ea502f8b3f"} Mar 18 13:09:05.732194 master-0 kubenswrapper[7146]: I0318 13:09:05.732148 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gwnt"] Mar 18 13:09:05.739959 master-0 kubenswrapper[7146]: I0318 13:09:05.739881 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" podStartSLOduration=7.0353512 podStartE2EDuration="22.739864178s" podCreationTimestamp="2026-03-18 13:08:43 +0000 UTC" firstStartedPulling="2026-03-18 13:08:46.946404329 +0000 UTC m=+35.754621700" lastFinishedPulling="2026-03-18 13:09:02.650917317 +0000 UTC m=+51.459134678" observedRunningTime="2026-03-18 13:09:05.739432266 +0000 UTC m=+54.547649647" watchObservedRunningTime="2026-03-18 13:09:05.739864178 +0000 UTC m=+54.548081539" Mar 18 13:09:05.743596 master-0 kubenswrapper[7146]: W0318 13:09:05.743536 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afcb319_16c7_4f27_9db8_21b105a1bdba.slice/crio-4411dce91ebbb16615b8e509124d82cbd8fb2e5c4cdd14d9d48b5dd2c475d27f WatchSource:0}: Error finding container 4411dce91ebbb16615b8e509124d82cbd8fb2e5c4cdd14d9d48b5dd2c475d27f: Status 404 returned error can't find the container with id 
4411dce91ebbb16615b8e509124d82cbd8fb2e5c4cdd14d9d48b5dd2c475d27f Mar 18 13:09:05.771757 master-0 kubenswrapper[7146]: I0318 13:09:05.770712 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b977b9447-ssl9l"] Mar 18 13:09:05.771984 master-0 kubenswrapper[7146]: I0318 13:09:05.771879 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.774475 master-0 kubenswrapper[7146]: I0318 13:09:05.774416 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:09:05.774933 master-0 kubenswrapper[7146]: I0318 13:09:05.774911 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:09:05.778479 master-0 kubenswrapper[7146]: I0318 13:09:05.778410 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:09:05.778761 master-0 kubenswrapper[7146]: I0318 13:09:05.778731 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:09:05.779076 master-0 kubenswrapper[7146]: I0318 13:09:05.779058 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:09:05.781652 master-0 kubenswrapper[7146]: I0318 13:09:05.781607 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b977b9447-ssl9l"] Mar 18 13:09:05.809454 master-0 kubenswrapper[7146]: I0318 13:09:05.809395 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:09:05.867754 master-0 kubenswrapper[7146]: I0318 13:09:05.867689 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-t84qx\" (UniqueName: \"kubernetes.io/projected/b33c6618-ccab-4d62-ab77-1200a2b6f389-kube-api-access-t84qx\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.867754 master-0 kubenswrapper[7146]: I0318 13:09:05.867749 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-proxy-ca-bundles\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.867992 master-0 kubenswrapper[7146]: I0318 13:09:05.867806 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-config\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.867992 master-0 kubenswrapper[7146]: I0318 13:09:05.867878 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33c6618-ccab-4d62-ab77-1200a2b6f389-serving-cert\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.867992 master-0 kubenswrapper[7146]: I0318 13:09:05.867893 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-client-ca\") pod \"controller-manager-b977b9447-ssl9l\" (UID: 
\"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.969184 master-0 kubenswrapper[7146]: I0318 13:09:05.968815 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t84qx\" (UniqueName: \"kubernetes.io/projected/b33c6618-ccab-4d62-ab77-1200a2b6f389-kube-api-access-t84qx\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.969184 master-0 kubenswrapper[7146]: I0318 13:09:05.968861 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-proxy-ca-bundles\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.969184 master-0 kubenswrapper[7146]: I0318 13:09:05.968892 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-config\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.971579 master-0 kubenswrapper[7146]: I0318 13:09:05.969565 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33c6618-ccab-4d62-ab77-1200a2b6f389-serving-cert\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.971579 master-0 kubenswrapper[7146]: I0318 13:09:05.969631 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-client-ca\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.971579 master-0 kubenswrapper[7146]: I0318 13:09:05.971518 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-config\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.971579 master-0 kubenswrapper[7146]: I0318 13:09:05.971529 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-proxy-ca-bundles\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.972329 master-0 kubenswrapper[7146]: I0318 13:09:05.972295 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-client-ca\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.974097 master-0 kubenswrapper[7146]: I0318 13:09:05.974054 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33c6618-ccab-4d62-ab77-1200a2b6f389-serving-cert\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:05.997069 master-0 kubenswrapper[7146]: I0318 
13:09:05.996411 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s6vkz"] Mar 18 13:09:05.997307 master-0 kubenswrapper[7146]: I0318 13:09:05.997279 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.001672 master-0 kubenswrapper[7146]: I0318 13:09:06.001574 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm"] Mar 18 13:09:06.002176 master-0 kubenswrapper[7146]: I0318 13:09:06.002156 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.006029 master-0 kubenswrapper[7146]: I0318 13:09:06.004714 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:09:06.009870 master-0 kubenswrapper[7146]: I0318 13:09:06.009807 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t84qx\" (UniqueName: \"kubernetes.io/projected/b33c6618-ccab-4d62-ab77-1200a2b6f389-kube-api-access-t84qx\") pod \"controller-manager-b977b9447-ssl9l\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:06.013330 master-0 kubenswrapper[7146]: I0318 13:09:06.013284 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm"] Mar 18 13:09:06.021315 master-0 kubenswrapper[7146]: I0318 13:09:06.019017 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6vkz"] Mar 18 13:09:06.107692 master-0 kubenswrapper[7146]: I0318 13:09:06.105628 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:06.172339 master-0 kubenswrapper[7146]: I0318 13:09:06.172263 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-webhook-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.172339 master-0 kubenswrapper[7146]: I0318 13:09:06.172338 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4dcj\" (UniqueName: \"kubernetes.io/projected/375d5112-d2be-47cf-bee1-82614ba71ed8-kube-api-access-d4dcj\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.172541 master-0 kubenswrapper[7146]: I0318 13:09:06.172368 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzkc\" (UniqueName: \"kubernetes.io/projected/aeed4251-c92a-49e9-a785-9903d84ca0d6-kube-api-access-hrzkc\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.172541 master-0 kubenswrapper[7146]: I0318 13:09:06.172392 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-apiservice-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.172541 master-0 kubenswrapper[7146]: I0318 13:09:06.172415 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-utilities\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.172541 master-0 kubenswrapper[7146]: I0318 13:09:06.172443 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-catalog-content\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.172541 master-0 kubenswrapper[7146]: I0318 13:09:06.172469 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/375d5112-d2be-47cf-bee1-82614ba71ed8-tmpfs\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.273709 master-0 kubenswrapper[7146]: I0318 13:09:06.273665 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4dcj\" (UniqueName: \"kubernetes.io/projected/375d5112-d2be-47cf-bee1-82614ba71ed8-kube-api-access-d4dcj\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.273709 master-0 kubenswrapper[7146]: I0318 13:09:06.273711 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzkc\" (UniqueName: \"kubernetes.io/projected/aeed4251-c92a-49e9-a785-9903d84ca0d6-kube-api-access-hrzkc\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " 
pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.273738 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-apiservice-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.273762 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-utilities\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.273782 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-catalog-content\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.273804 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/375d5112-d2be-47cf-bee1-82614ba71ed8-tmpfs\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.273923 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-webhook-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: 
\"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.274510 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-utilities\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.275511 master-0 kubenswrapper[7146]: I0318 13:09:06.274772 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-catalog-content\") pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.278063 master-0 kubenswrapper[7146]: I0318 13:09:06.278022 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-apiservice-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.280378 master-0 kubenswrapper[7146]: I0318 13:09:06.278718 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/375d5112-d2be-47cf-bee1-82614ba71ed8-tmpfs\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.295607 master-0 kubenswrapper[7146]: I0318 13:09:06.295565 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzkc\" (UniqueName: \"kubernetes.io/projected/aeed4251-c92a-49e9-a785-9903d84ca0d6-kube-api-access-hrzkc\") 
pod \"redhat-operators-s6vkz\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.295745 master-0 kubenswrapper[7146]: I0318 13:09:06.295717 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4dcj\" (UniqueName: \"kubernetes.io/projected/375d5112-d2be-47cf-bee1-82614ba71ed8-kube-api-access-d4dcj\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.295780 master-0 kubenswrapper[7146]: I0318 13:09:06.295745 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-webhook-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.338371 master-0 kubenswrapper[7146]: I0318 13:09:06.338322 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:06.355710 master-0 kubenswrapper[7146]: I0318 13:09:06.352209 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:06.517107 master-0 kubenswrapper[7146]: I0318 13:09:06.516667 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b977b9447-ssl9l"] Mar 18 13:09:06.535487 master-0 kubenswrapper[7146]: W0318 13:09:06.535437 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb33c6618_ccab_4d62_ab77_1200a2b6f389.slice/crio-5eb68d4611fcf7db1c5bff2edfaefdda2e609086c1f702c24d7b83aa3e8009de WatchSource:0}: Error finding container 5eb68d4611fcf7db1c5bff2edfaefdda2e609086c1f702c24d7b83aa3e8009de: Status 404 returned error can't find the container with id 5eb68d4611fcf7db1c5bff2edfaefdda2e609086c1f702c24d7b83aa3e8009de Mar 18 13:09:06.719331 master-0 kubenswrapper[7146]: I0318 13:09:06.718861 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" event={"ID":"b33c6618-ccab-4d62-ab77-1200a2b6f389","Type":"ContainerStarted","Data":"d6b517c993fe7a12be7fb026e3f84c251058faaef715d968729966f3a54737f4"} Mar 18 13:09:06.719331 master-0 kubenswrapper[7146]: I0318 13:09:06.718905 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" event={"ID":"b33c6618-ccab-4d62-ab77-1200a2b6f389","Type":"ContainerStarted","Data":"5eb68d4611fcf7db1c5bff2edfaefdda2e609086c1f702c24d7b83aa3e8009de"} Mar 18 13:09:06.722011 master-0 kubenswrapper[7146]: I0318 13:09:06.720763 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:06.723440 master-0 kubenswrapper[7146]: I0318 13:09:06.722389 7146 generic.go:334] "Generic (PLEG): container finished" podID="1afcb319-16c7-4f27-9db8-21b105a1bdba" 
containerID="1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778" exitCode=0 Mar 18 13:09:06.723440 master-0 kubenswrapper[7146]: I0318 13:09:06.722820 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerDied","Data":"1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778"} Mar 18 13:09:06.723440 master-0 kubenswrapper[7146]: I0318 13:09:06.722883 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerStarted","Data":"4411dce91ebbb16615b8e509124d82cbd8fb2e5c4cdd14d9d48b5dd2c475d27f"} Mar 18 13:09:06.723440 master-0 kubenswrapper[7146]: I0318 13:09:06.722971 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="d6fab6cf-3b8f-47a6-837a-319049f487e3" containerName="installer" containerID="cri-o://cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d" gracePeriod=30 Mar 18 13:09:06.724197 master-0 kubenswrapper[7146]: I0318 13:09:06.724099 7146 patch_prober.go:28] interesting pod/controller-manager-b977b9447-ssl9l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 13:09:06.724197 master-0 kubenswrapper[7146]: I0318 13:09:06.724157 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" podUID="b33c6618-ccab-4d62-ab77-1200a2b6f389" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 13:09:06.768854 master-0 kubenswrapper[7146]: I0318 13:09:06.768058 7146 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" podStartSLOduration=19.768034924 podStartE2EDuration="19.768034924s" podCreationTimestamp="2026-03-18 13:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:06.743527978 +0000 UTC m=+55.551745339" watchObservedRunningTime="2026-03-18 13:09:06.768034924 +0000 UTC m=+55.576252285" Mar 18 13:09:06.789906 master-0 kubenswrapper[7146]: I0318 13:09:06.789709 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6vkz"] Mar 18 13:09:06.796153 master-0 kubenswrapper[7146]: W0318 13:09:06.796099 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeed4251_c92a_49e9_a785_9903d84ca0d6.slice/crio-d34f95855ed1c06ff9e9c9318208614cb98f25ab4499c2a654f96d13704f90e3 WatchSource:0}: Error finding container d34f95855ed1c06ff9e9c9318208614cb98f25ab4499c2a654f96d13704f90e3: Status 404 returned error can't find the container with id d34f95855ed1c06ff9e9c9318208614cb98f25ab4499c2a654f96d13704f90e3 Mar 18 13:09:06.848205 master-0 kubenswrapper[7146]: I0318 13:09:06.848142 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm"] Mar 18 13:09:07.000264 master-0 kubenswrapper[7146]: I0318 13:09:06.998313 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b977b9447-ssl9l"] Mar 18 13:09:07.034805 master-0 kubenswrapper[7146]: I0318 13:09:07.033199 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"] Mar 18 13:09:07.171963 master-0 kubenswrapper[7146]: I0318 13:09:07.171600 7146 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 13:09:07.189335 master-0 kubenswrapper[7146]: I0318 13:09:07.172249 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.202658 master-0 kubenswrapper[7146]: I0318 13:09:07.201585 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-srjhk"] Mar 18 13:09:07.221436 master-0 kubenswrapper[7146]: I0318 13:09:07.211684 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.221436 master-0 kubenswrapper[7146]: I0318 13:09:07.214631 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 13:09:07.237050 master-0 kubenswrapper[7146]: I0318 13:09:07.237004 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srjhk"] Mar 18 13:09:07.306031 master-0 kubenswrapper[7146]: I0318 13:09:07.301561 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.306031 master-0 kubenswrapper[7146]: I0318 13:09:07.301674 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84xd\" (UniqueName: \"kubernetes.io/projected/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-kube-api-access-x84xd\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.306031 master-0 kubenswrapper[7146]: I0318 13:09:07.301718 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-var-lock\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.306031 master-0 kubenswrapper[7146]: I0318 13:09:07.301745 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-utilities\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.306031 master-0 kubenswrapper[7146]: I0318 13:09:07.301790 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-catalog-content\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.306031 master-0 kubenswrapper[7146]: I0318 13:09:07.301814 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245f3af1-ccfb-4191-9a34-00852e52a73d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.403438 master-0 kubenswrapper[7146]: I0318 13:09:07.403156 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-var-lock\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.403438 master-0 
kubenswrapper[7146]: I0318 13:09:07.403206 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-utilities\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.403438 master-0 kubenswrapper[7146]: I0318 13:09:07.403232 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-catalog-content\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.403438 master-0 kubenswrapper[7146]: I0318 13:09:07.403250 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245f3af1-ccfb-4191-9a34-00852e52a73d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.403438 master-0 kubenswrapper[7146]: I0318 13:09:07.403269 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.403438 master-0 kubenswrapper[7146]: I0318 13:09:07.403322 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84xd\" (UniqueName: \"kubernetes.io/projected/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-kube-api-access-x84xd\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 
13:09:07.403758 master-0 kubenswrapper[7146]: I0318 13:09:07.403629 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-var-lock\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.405742 master-0 kubenswrapper[7146]: I0318 13:09:07.405723 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-utilities\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.406051 master-0 kubenswrapper[7146]: I0318 13:09:07.406010 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-catalog-content\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.406234 master-0 kubenswrapper[7146]: I0318 13:09:07.406216 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.439038 master-0 kubenswrapper[7146]: I0318 13:09:07.438840 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245f3af1-ccfb-4191-9a34-00852e52a73d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.456333 master-0 kubenswrapper[7146]: 
I0318 13:09:07.456298 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x84xd\" (UniqueName: \"kubernetes.io/projected/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-kube-api-access-x84xd\") pod \"certified-operators-srjhk\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.504754 master-0 kubenswrapper[7146]: I0318 13:09:07.504697 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:09:07.563444 master-0 kubenswrapper[7146]: I0318 13:09:07.560159 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:07.728703 master-0 kubenswrapper[7146]: I0318 13:09:07.728448 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" event={"ID":"375d5112-d2be-47cf-bee1-82614ba71ed8","Type":"ContainerStarted","Data":"ecb6b64ebdf02333f1dcfb5cd1484cb4afcf14551f6b8b3346d5180e3339b628"} Mar 18 13:09:07.728703 master-0 kubenswrapper[7146]: I0318 13:09:07.728496 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" event={"ID":"375d5112-d2be-47cf-bee1-82614ba71ed8","Type":"ContainerStarted","Data":"f66902e008f5e3816231ec2d4e1a0e85eeb3453ed6e4f6ce1b4d241b3bf8e3ac"} Mar 18 13:09:07.729052 master-0 kubenswrapper[7146]: I0318 13:09:07.729004 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:07.732200 master-0 kubenswrapper[7146]: I0318 13:09:07.732168 7146 generic.go:334] "Generic (PLEG): container finished" podID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerID="d16347f3e9ba2d5bd6ec0c2072d5dd188dabafd7872c967eafdae811def53a67" exitCode=0 Mar 18 13:09:07.732319 master-0 
kubenswrapper[7146]: I0318 13:09:07.732273 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerDied","Data":"d16347f3e9ba2d5bd6ec0c2072d5dd188dabafd7872c967eafdae811def53a67"} Mar 18 13:09:07.732384 master-0 kubenswrapper[7146]: I0318 13:09:07.732331 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerStarted","Data":"d34f95855ed1c06ff9e9c9318208614cb98f25ab4499c2a654f96d13704f90e3"} Mar 18 13:09:07.732738 master-0 kubenswrapper[7146]: I0318 13:09:07.732691 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" podUID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" containerName="route-controller-manager" containerID="cri-o://861371cd07cf9c0a4ae28a200cdeb0dec6fad29b4b4b5448a50e24d192d7c15c" gracePeriod=30 Mar 18 13:09:07.736608 master-0 kubenswrapper[7146]: I0318 13:09:07.736560 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:08.728834 master-0 kubenswrapper[7146]: I0318 13:09:08.728748 7146 patch_prober.go:28] interesting pod/packageserver-5dccbdd8cc-pw7vm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:09:08.729384 master-0 kubenswrapper[7146]: I0318 13:09:08.728841 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" podUID="375d5112-d2be-47cf-bee1-82614ba71ed8" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.128.0.53:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:08.742076 master-0 kubenswrapper[7146]: I0318 13:09:08.742040 7146 generic.go:334] "Generic (PLEG): container finished" podID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" containerID="861371cd07cf9c0a4ae28a200cdeb0dec6fad29b4b4b5448a50e24d192d7c15c" exitCode=0 Mar 18 13:09:08.742353 master-0 kubenswrapper[7146]: I0318 13:09:08.742286 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" event={"ID":"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5","Type":"ContainerDied","Data":"861371cd07cf9c0a4ae28a200cdeb0dec6fad29b4b4b5448a50e24d192d7c15c"} Mar 18 13:09:08.742489 master-0 kubenswrapper[7146]: I0318 13:09:08.742443 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" podUID="b33c6618-ccab-4d62-ab77-1200a2b6f389" containerName="controller-manager" containerID="cri-o://d6b517c993fe7a12be7fb026e3f84c251058faaef715d968729966f3a54737f4" gracePeriod=30 Mar 18 13:09:08.765688 master-0 kubenswrapper[7146]: I0318 13:09:08.765634 7146 patch_prober.go:28] interesting pod/route-controller-manager-68f97cf79f-trbrq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.47:8443/healthz\": dial tcp 10.128.0.47:8443: connect: connection refused" start-of-body= Mar 18 13:09:08.765846 master-0 kubenswrapper[7146]: I0318 13:09:08.765697 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" podUID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.47:8443/healthz\": dial tcp 10.128.0.47:8443: connect: 
connection refused" Mar 18 13:09:09.043755 master-0 kubenswrapper[7146]: I0318 13:09:09.043710 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" Mar 18 13:09:09.129074 master-0 kubenswrapper[7146]: I0318 13:09:09.128633 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6nw6\" (UniqueName: \"kubernetes.io/projected/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-kube-api-access-m6nw6\") pod \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " Mar 18 13:09:09.129074 master-0 kubenswrapper[7146]: I0318 13:09:09.128738 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-serving-cert\") pod \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " Mar 18 13:09:09.129074 master-0 kubenswrapper[7146]: I0318 13:09:09.128786 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-client-ca\") pod \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " Mar 18 13:09:09.129074 master-0 kubenswrapper[7146]: I0318 13:09:09.128807 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-config\") pod \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\" (UID: \"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5\") " Mar 18 13:09:09.129460 master-0 kubenswrapper[7146]: I0318 13:09:09.129388 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" (UID: "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:09.129513 master-0 kubenswrapper[7146]: I0318 13:09:09.129471 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-config" (OuterVolumeSpecName: "config") pod "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" (UID: "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:09.131963 master-0 kubenswrapper[7146]: I0318 13:09:09.131908 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-kube-api-access-m6nw6" (OuterVolumeSpecName: "kube-api-access-m6nw6") pod "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" (UID: "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5"). InnerVolumeSpecName "kube-api-access-m6nw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:09:09.132510 master-0 kubenswrapper[7146]: I0318 13:09:09.132455 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" (UID: "b0e359bd-b9ff-42c3-9c2a-037ae05f41d5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:09:09.230064 master-0 kubenswrapper[7146]: I0318 13:09:09.229922 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:09.230064 master-0 kubenswrapper[7146]: I0318 13:09:09.230010 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:09.230064 master-0 kubenswrapper[7146]: I0318 13:09:09.230031 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6nw6\" (UniqueName: \"kubernetes.io/projected/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-kube-api-access-m6nw6\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:09.230064 master-0 kubenswrapper[7146]: I0318 13:09:09.230046 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:09.552890 master-0 kubenswrapper[7146]: I0318 13:09:09.552558 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 13:09:09.567517 master-0 kubenswrapper[7146]: I0318 13:09:09.567466 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srjhk"] Mar 18 13:09:09.571415 master-0 kubenswrapper[7146]: I0318 13:09:09.571198 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:09:09.749921 master-0 kubenswrapper[7146]: I0318 13:09:09.749856 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" 
event={"ID":"b0e359bd-b9ff-42c3-9c2a-037ae05f41d5","Type":"ContainerDied","Data":"b58767da3cbc4988be1f3bf69998566919f279318ad1a06350e9eba709e90e27"} Mar 18 13:09:09.749921 master-0 kubenswrapper[7146]: I0318 13:09:09.749906 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq" Mar 18 13:09:09.760472 master-0 kubenswrapper[7146]: I0318 13:09:09.749967 7146 scope.go:117] "RemoveContainer" containerID="861371cd07cf9c0a4ae28a200cdeb0dec6fad29b4b4b5448a50e24d192d7c15c" Mar 18 13:09:09.760472 master-0 kubenswrapper[7146]: I0318 13:09:09.751567 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"245f3af1-ccfb-4191-9a34-00852e52a73d","Type":"ContainerStarted","Data":"4b5f4bb0323e76ef7cd02a1d41797e05db5442b3a066933557b53fceaffa8ab5"} Mar 18 13:09:09.760472 master-0 kubenswrapper[7146]: I0318 13:09:09.753060 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerStarted","Data":"375f229e561ce7c3ba595936a0178638cea02d8b22c4089efd4e83226dfb0f4d"} Mar 18 13:09:10.367861 master-0 kubenswrapper[7146]: I0318 13:09:10.367770 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:09:10.367861 master-0 kubenswrapper[7146]: I0318 13:09:10.367828 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:09:10.765235 master-0 kubenswrapper[7146]: I0318 13:09:10.765179 7146 generic.go:334] "Generic (PLEG): container finished" podID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerID="1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99" exitCode=0 Mar 18 13:09:10.765819 master-0 kubenswrapper[7146]: I0318 13:09:10.765249 7146 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerDied","Data":"1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99"} Mar 18 13:09:10.766818 master-0 kubenswrapper[7146]: I0318 13:09:10.766746 7146 generic.go:334] "Generic (PLEG): container finished" podID="b33c6618-ccab-4d62-ab77-1200a2b6f389" containerID="d6b517c993fe7a12be7fb026e3f84c251058faaef715d968729966f3a54737f4" exitCode=0 Mar 18 13:09:10.766818 master-0 kubenswrapper[7146]: I0318 13:09:10.766814 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" event={"ID":"b33c6618-ccab-4d62-ab77-1200a2b6f389","Type":"ContainerDied","Data":"d6b517c993fe7a12be7fb026e3f84c251058faaef715d968729966f3a54737f4"} Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: I0318 13:09:11.066184 7146 patch_prober.go:28] interesting pod/apiserver-574f6d5bf6-8krhk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]log ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]etcd ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/max-in-flight-filter ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 13:09:11.070041 
master-0 kubenswrapper[7146]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/openshift.io-startinformers ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: livez check failed Mar 18 13:09:11.070041 master-0 kubenswrapper[7146]: I0318 13:09:11.066260 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" podUID="b41c9132-92ef-429d-bdd5-9bdb024e04fc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:11.128171 master-0 kubenswrapper[7146]: I0318 13:09:11.128126 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:11.269579 master-0 kubenswrapper[7146]: I0318 13:09:11.269271 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t84qx\" (UniqueName: \"kubernetes.io/projected/b33c6618-ccab-4d62-ab77-1200a2b6f389-kube-api-access-t84qx\") pod \"b33c6618-ccab-4d62-ab77-1200a2b6f389\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " Mar 18 13:09:11.269579 master-0 kubenswrapper[7146]: I0318 13:09:11.269350 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33c6618-ccab-4d62-ab77-1200a2b6f389-serving-cert\") pod \"b33c6618-ccab-4d62-ab77-1200a2b6f389\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " Mar 18 13:09:11.269579 master-0 kubenswrapper[7146]: I0318 13:09:11.269401 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-proxy-ca-bundles\") pod \"b33c6618-ccab-4d62-ab77-1200a2b6f389\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " Mar 18 13:09:11.269579 master-0 kubenswrapper[7146]: I0318 13:09:11.269437 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-client-ca\") pod \"b33c6618-ccab-4d62-ab77-1200a2b6f389\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " Mar 18 13:09:11.269579 master-0 kubenswrapper[7146]: I0318 13:09:11.269488 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-config\") pod \"b33c6618-ccab-4d62-ab77-1200a2b6f389\" (UID: \"b33c6618-ccab-4d62-ab77-1200a2b6f389\") " Mar 18 13:09:11.270188 master-0 kubenswrapper[7146]: I0318 13:09:11.270105 7146 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-client-ca" (OuterVolumeSpecName: "client-ca") pod "b33c6618-ccab-4d62-ab77-1200a2b6f389" (UID: "b33c6618-ccab-4d62-ab77-1200a2b6f389"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:11.270400 master-0 kubenswrapper[7146]: I0318 13:09:11.270373 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b33c6618-ccab-4d62-ab77-1200a2b6f389" (UID: "b33c6618-ccab-4d62-ab77-1200a2b6f389"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:11.270770 master-0 kubenswrapper[7146]: I0318 13:09:11.270739 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-config" (OuterVolumeSpecName: "config") pod "b33c6618-ccab-4d62-ab77-1200a2b6f389" (UID: "b33c6618-ccab-4d62-ab77-1200a2b6f389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:09:11.272188 master-0 kubenswrapper[7146]: I0318 13:09:11.272140 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33c6618-ccab-4d62-ab77-1200a2b6f389-kube-api-access-t84qx" (OuterVolumeSpecName: "kube-api-access-t84qx") pod "b33c6618-ccab-4d62-ab77-1200a2b6f389" (UID: "b33c6618-ccab-4d62-ab77-1200a2b6f389"). InnerVolumeSpecName "kube-api-access-t84qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:09:11.273320 master-0 kubenswrapper[7146]: I0318 13:09:11.273290 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33c6618-ccab-4d62-ab77-1200a2b6f389-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b33c6618-ccab-4d62-ab77-1200a2b6f389" (UID: "b33c6618-ccab-4d62-ab77-1200a2b6f389"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:09:11.370624 master-0 kubenswrapper[7146]: I0318 13:09:11.370494 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t84qx\" (UniqueName: \"kubernetes.io/projected/b33c6618-ccab-4d62-ab77-1200a2b6f389-kube-api-access-t84qx\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:11.370624 master-0 kubenswrapper[7146]: I0318 13:09:11.370543 7146 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33c6618-ccab-4d62-ab77-1200a2b6f389-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:11.370624 master-0 kubenswrapper[7146]: I0318 13:09:11.370556 7146 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:11.370624 master-0 kubenswrapper[7146]: I0318 13:09:11.370568 7146 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:11.370624 master-0 kubenswrapper[7146]: I0318 13:09:11.370579 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33c6618-ccab-4d62-ab77-1200a2b6f389-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:11.668409 master-0 kubenswrapper[7146]: I0318 13:09:11.666576 7146 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" podStartSLOduration=6.666543165 podStartE2EDuration="6.666543165s" podCreationTimestamp="2026-03-18 13:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:09.571278924 +0000 UTC m=+58.379496285" watchObservedRunningTime="2026-03-18 13:09:11.666543165 +0000 UTC m=+60.474760526" Mar 18 13:09:11.779033 master-0 kubenswrapper[7146]: I0318 13:09:11.778971 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"245f3af1-ccfb-4191-9a34-00852e52a73d","Type":"ContainerStarted","Data":"2590a481a145d76e2b7df7ede04cc027447c99a8ab51376b367af34e50c7be34"} Mar 18 13:09:11.781253 master-0 kubenswrapper[7146]: I0318 13:09:11.781208 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" event={"ID":"b33c6618-ccab-4d62-ab77-1200a2b6f389","Type":"ContainerDied","Data":"5eb68d4611fcf7db1c5bff2edfaefdda2e609086c1f702c24d7b83aa3e8009de"} Mar 18 13:09:11.781309 master-0 kubenswrapper[7146]: I0318 13:09:11.781281 7146 scope.go:117] "RemoveContainer" containerID="d6b517c993fe7a12be7fb026e3f84c251058faaef715d968729966f3a54737f4" Mar 18 13:09:11.781361 master-0 kubenswrapper[7146]: I0318 13:09:11.781241 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b977b9447-ssl9l" Mar 18 13:09:14.898310 master-0 kubenswrapper[7146]: I0318 13:09:14.898220 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wl929" Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: I0318 13:09:15.701474 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: I0318 13:09:15.701740 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="82d33ac9-1471-47c5-802c-c267e7c1694f" containerName="installer" containerID="cri-o://d9d3a75725d56154d845d3eafe31cef00c186357af6963fb23afd016af24585b" gracePeriod=30 Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: I0318 13:09:15.703068 7146 patch_prober.go:28] interesting pod/apiserver-574f6d5bf6-8krhk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]log ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]etcd ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/max-in-flight-filter ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 13:09:15.706014 master-0 
kubenswrapper[7146]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/openshift.io-startinformers ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: livez check failed Mar 18 13:09:15.706014 master-0 kubenswrapper[7146]: I0318 13:09:15.703139 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" podUID="b41c9132-92ef-429d-bdd5-9bdb024e04fc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:16.091954 master-0 kubenswrapper[7146]: I0318 13:09:16.091797 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"] Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: E0318 13:09:16.099952 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33c6618-ccab-4d62-ab77-1200a2b6f389" containerName="controller-manager" Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: I0318 13:09:16.099994 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33c6618-ccab-4d62-ab77-1200a2b6f389" containerName="controller-manager" Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: E0318 13:09:16.100014 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" containerName="route-controller-manager" Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: I0318 13:09:16.100021 7146 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" containerName="route-controller-manager" Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: I0318 13:09:16.100121 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" containerName="route-controller-manager" Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: I0318 13:09:16.100145 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33c6618-ccab-4d62-ab77-1200a2b6f389" containerName="controller-manager" Mar 18 13:09:16.102366 master-0 kubenswrapper[7146]: I0318 13:09:16.100512 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.104487 master-0 kubenswrapper[7146]: I0318 13:09:16.104445 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:09:16.104811 master-0 kubenswrapper[7146]: I0318 13:09:16.104780 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:09:16.108006 master-0 kubenswrapper[7146]: I0318 13:09:16.105084 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:09:16.108006 master-0 kubenswrapper[7146]: I0318 13:09:16.106648 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:09:16.108006 master-0 kubenswrapper[7146]: I0318 13:09:16.106805 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:09:16.134006 master-0 kubenswrapper[7146]: I0318 13:09:16.133869 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"] Mar 18 13:09:16.151689 master-0 kubenswrapper[7146]: I0318 13:09:16.151586 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.151689 master-0 kubenswrapper[7146]: I0318 13:09:16.151680 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.151961 master-0 kubenswrapper[7146]: I0318 13:09:16.151704 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.151961 master-0 kubenswrapper[7146]: I0318 13:09:16.151742 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.263324 master-0 kubenswrapper[7146]: 
I0318 13:09:16.257519 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.263324 master-0 kubenswrapper[7146]: I0318 13:09:16.257618 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.263324 master-0 kubenswrapper[7146]: I0318 13:09:16.257659 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.263324 master-0 kubenswrapper[7146]: I0318 13:09:16.257818 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.264130 master-0 kubenswrapper[7146]: I0318 13:09:16.264079 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") pod 
\"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.265391 master-0 kubenswrapper[7146]: I0318 13:09:16.265337 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.271245 master-0 kubenswrapper[7146]: I0318 13:09:16.271200 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.536073 master-0 kubenswrapper[7146]: I0318 13:09:16.535799 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b977b9447-ssl9l"] Mar 18 13:09:16.548840 master-0 kubenswrapper[7146]: I0318 13:09:16.548794 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.549036 master-0 kubenswrapper[7146]: I0318 13:09:16.548907 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b977b9447-ssl9l"] Mar 18 13:09:16.582967 master-0 kubenswrapper[7146]: I0318 13:09:16.582079 7146 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=9.582050362 podStartE2EDuration="9.582050362s" podCreationTimestamp="2026-03-18 13:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:09:16.576704002 +0000 UTC m=+65.384921363" watchObservedRunningTime="2026-03-18 13:09:16.582050362 +0000 UTC m=+65.390267743" Mar 18 13:09:16.604322 master-0 kubenswrapper[7146]: I0318 13:09:16.604252 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 13:09:16.605228 master-0 kubenswrapper[7146]: I0318 13:09:16.605028 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.626026 master-0 kubenswrapper[7146]: I0318 13:09:16.624422 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 13:09:16.652096 master-0 kubenswrapper[7146]: I0318 13:09:16.652048 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"] Mar 18 13:09:16.659783 master-0 kubenswrapper[7146]: I0318 13:09:16.659731 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f97cf79f-trbrq"] Mar 18 13:09:16.667746 master-0 kubenswrapper[7146]: I0318 13:09:16.664725 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.667746 master-0 
kubenswrapper[7146]: I0318 13:09:16.664810 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.667746 master-0 kubenswrapper[7146]: I0318 13:09:16.664885 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-var-lock\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.737322 master-0 kubenswrapper[7146]: I0318 13:09:16.737252 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:09:16.768520 master-0 kubenswrapper[7146]: I0318 13:09:16.767476 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-var-lock\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.768520 master-0 kubenswrapper[7146]: I0318 13:09:16.767550 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.768520 master-0 kubenswrapper[7146]: I0318 13:09:16.767588 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.768520 master-0 kubenswrapper[7146]: I0318 13:09:16.768058 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-var-lock\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.768520 master-0 kubenswrapper[7146]: I0318 13:09:16.768107 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.793045 master-0 kubenswrapper[7146]: I0318 13:09:16.792556 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:16.947157 master-0 kubenswrapper[7146]: I0318 13:09:16.947084 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:17.370502 master-0 kubenswrapper[7146]: I0318 13:09:17.370445 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e359bd-b9ff-42c3-9c2a-037ae05f41d5" path="/var/lib/kubelet/pods/b0e359bd-b9ff-42c3-9c2a-037ae05f41d5/volumes" Mar 18 13:09:17.371137 master-0 kubenswrapper[7146]: I0318 13:09:17.371091 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33c6618-ccab-4d62-ab77-1200a2b6f389" path="/var/lib/kubelet/pods/b33c6618-ccab-4d62-ab77-1200a2b6f389/volumes" Mar 18 13:09:17.396879 master-0 kubenswrapper[7146]: I0318 13:09:17.396795 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"] Mar 18 13:09:17.397753 master-0 kubenswrapper[7146]: I0318 13:09:17.397718 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" Mar 18 13:09:17.399399 master-0 kubenswrapper[7146]: I0318 13:09:17.399260 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:09:17.405712 master-0 kubenswrapper[7146]: I0318 13:09:17.405663 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"] Mar 18 13:09:17.477386 master-0 kubenswrapper[7146]: I0318 13:09:17.477344 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w454\" (UniqueName: \"kubernetes.io/projected/933a37fd-d76a-4f60-8dd8-301fb73daf42-kube-api-access-5w454\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" Mar 18 13:09:17.477657 master-0 
kubenswrapper[7146]: I0318 13:09:17.477639 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/933a37fd-d76a-4f60-8dd8-301fb73daf42-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:09:17.579081 master-0 kubenswrapper[7146]: I0318 13:09:17.578969 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w454\" (UniqueName: \"kubernetes.io/projected/933a37fd-d76a-4f60-8dd8-301fb73daf42-kube-api-access-5w454\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:09:17.579081 master-0 kubenswrapper[7146]: I0318 13:09:17.579062 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/933a37fd-d76a-4f60-8dd8-301fb73daf42-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:09:17.583836 master-0 kubenswrapper[7146]: I0318 13:09:17.583792 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/933a37fd-d76a-4f60-8dd8-301fb73daf42-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:09:17.597217 master-0 kubenswrapper[7146]: I0318 13:09:17.597173 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w454\" (UniqueName: \"kubernetes.io/projected/933a37fd-d76a-4f60-8dd8-301fb73daf42-kube-api-access-5w454\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:09:17.730915 master-0 kubenswrapper[7146]: I0318 13:09:17.730826 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:09:17.760026 master-0 kubenswrapper[7146]: I0318 13:09:17.759909 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"]
Mar 18 13:09:17.760902 master-0 kubenswrapper[7146]: I0318 13:09:17.760864 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.763778 master-0 kubenswrapper[7146]: I0318 13:09:17.762785 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:09:17.765220 master-0 kubenswrapper[7146]: I0318 13:09:17.765168 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:09:17.765454 master-0 kubenswrapper[7146]: I0318 13:09:17.765400 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 13:09:17.766437 master-0 kubenswrapper[7146]: I0318 13:09:17.766394 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:09:17.766748 master-0 kubenswrapper[7146]: I0318 13:09:17.766659 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:09:17.774978 master-0 kubenswrapper[7146]: I0318 13:09:17.774925 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 13:09:17.779014 master-0 kubenswrapper[7146]: I0318 13:09:17.778957 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"]
Mar 18 13:09:17.780889 master-0 kubenswrapper[7146]: I0318 13:09:17.780792 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.781282 master-0 kubenswrapper[7146]: I0318 13:09:17.781133 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.781282 master-0 kubenswrapper[7146]: I0318 13:09:17.781187 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.781282 master-0 kubenswrapper[7146]: I0318 13:09:17.781208 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.781282 master-0 kubenswrapper[7146]: I0318 13:09:17.781251 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.882136 master-0 kubenswrapper[7146]: I0318 13:09:17.882079 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.882459 master-0 kubenswrapper[7146]: I0318 13:09:17.882438 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.883846 master-0 kubenswrapper[7146]: I0318 13:09:17.883805 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.883993 master-0 kubenswrapper[7146]: I0318 13:09:17.882929 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.884161 master-0 kubenswrapper[7146]: I0318 13:09:17.884142 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.885416 master-0 kubenswrapper[7146]: I0318 13:09:17.885272 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.885613 master-0 kubenswrapper[7146]: I0318 13:09:17.885594 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.887210 master-0 kubenswrapper[7146]: I0318 13:09:17.886768 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.887711 master-0 kubenswrapper[7146]: I0318 13:09:17.887672 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:17.903973 master-0 kubenswrapper[7146]: I0318 13:09:17.903860 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:18.094441 master-0 kubenswrapper[7146]: I0318 13:09:18.094303 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"
Mar 18 13:09:20.373181 master-0 kubenswrapper[7146]: I0318 13:09:20.373144 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:09:20.380327 master-0 kubenswrapper[7146]: I0318 13:09:20.380260 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:09:20.820451 master-0 kubenswrapper[7146]: I0318 13:09:20.819756 7146 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 13:09:20.820451 master-0 kubenswrapper[7146]: I0318 13:09:20.820024 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c" gracePeriod=30
Mar 18 13:09:20.820451 master-0 kubenswrapper[7146]: I0318 13:09:20.820153 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e" gracePeriod=30
Mar 18 13:09:20.823175 master-0 kubenswrapper[7146]: I0318 13:09:20.822914 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 18 13:09:20.823175 master-0 kubenswrapper[7146]: E0318 13:09:20.823145 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 13:09:20.823175 master-0 kubenswrapper[7146]: I0318 13:09:20.823159 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 13:09:20.823175 master-0 kubenswrapper[7146]: E0318 13:09:20.823174 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 13:09:20.823175 master-0 kubenswrapper[7146]: I0318 13:09:20.823180 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 13:09:20.823384 master-0 kubenswrapper[7146]: I0318 13:09:20.823279 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 13:09:20.823384 master-0 kubenswrapper[7146]: I0318 13:09:20.823294 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 13:09:20.829493 master-0 kubenswrapper[7146]: I0318 13:09:20.829077 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:20.942588 master-0 kubenswrapper[7146]: I0318 13:09:20.942530 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:20.942588 master-0 kubenswrapper[7146]: I0318 13:09:20.942594 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:20.942813 master-0 kubenswrapper[7146]: I0318 13:09:20.942621 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:20.942813 master-0 kubenswrapper[7146]: I0318 13:09:20.942653 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:20.942813 master-0 kubenswrapper[7146]: I0318 13:09:20.942709 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:20.942813 master-0 kubenswrapper[7146]: I0318 13:09:20.942803 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.043821 master-0 kubenswrapper[7146]: I0318 13:09:21.043765 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.043821 master-0 kubenswrapper[7146]: I0318 13:09:21.043810 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.043821 master-0 kubenswrapper[7146]: I0318 13:09:21.043834 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.043859 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.043881 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.043920 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.044005 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.044041 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.044062 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.044082 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.044101 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:21.044116 master-0 kubenswrapper[7146]: I0318 13:09:21.044124 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:32.661864 master-0 kubenswrapper[7146]: I0318 13:09:32.661810 7146 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-mqh5c container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body=
Mar 18 13:09:32.661864 master-0 kubenswrapper[7146]: I0318 13:09:32.661868 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" podUID="8ce8e99d-7b02-4bf4-a438-adde851918cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused"
Mar 18 13:09:32.909911 master-0 kubenswrapper[7146]: I0318 13:09:32.909849 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerStarted","Data":"53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33"}
Mar 18 13:09:32.915394 master-0 kubenswrapper[7146]: I0318 13:09:32.915265 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerStarted","Data":"899460e45f82d95897613445afb3c5be2cc8dcbea4246a3823b8133d56c197e4"}
Mar 18 13:09:32.918734 master-0 kubenswrapper[7146]: I0318 13:09:32.917828 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerStarted","Data":"a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b"}
Mar 18 13:09:33.801690 master-0 kubenswrapper[7146]: I0318 13:09:33.801516 7146 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-hmbpl container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 18 13:09:33.801690 master-0 kubenswrapper[7146]: I0318 13:09:33.801595 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" podUID="1bf0ea4e-8b08-488f-b252-39580f46b756" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 18 13:09:33.861052 master-0 kubenswrapper[7146]: E0318 13:09:33.860921 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:33.861447 master-0 kubenswrapper[7146]: I0318 13:09:33.861422 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 13:09:33.931052 master-0 kubenswrapper[7146]: I0318 13:09:33.930967 7146 generic.go:334] "Generic (PLEG): container finished" podID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerID="899460e45f82d95897613445afb3c5be2cc8dcbea4246a3823b8133d56c197e4" exitCode=0
Mar 18 13:09:33.931300 master-0 kubenswrapper[7146]: I0318 13:09:33.931180 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerDied","Data":"899460e45f82d95897613445afb3c5be2cc8dcbea4246a3823b8133d56c197e4"}
Mar 18 13:09:33.934119 master-0 kubenswrapper[7146]: I0318 13:09:33.933707 7146 generic.go:334] "Generic (PLEG): container finished" podID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerID="a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b" exitCode=0
Mar 18 13:09:33.934119 master-0 kubenswrapper[7146]: I0318 13:09:33.933790 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerDied","Data":"a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b"}
Mar 18 13:09:33.936739 master-0 kubenswrapper[7146]: I0318 13:09:33.936679 7146 generic.go:334] "Generic (PLEG): container finished" podID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerID="53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33" exitCode=0
Mar 18 13:09:33.936810 master-0 kubenswrapper[7146]: I0318 13:09:33.936786 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerDied","Data":"53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33"}
Mar 18 13:09:33.939266 master-0 kubenswrapper[7146]: I0318 13:09:33.939226 7146 generic.go:334] "Generic (PLEG): container finished" podID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerID="9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57" exitCode=0
Mar 18 13:09:33.939367 master-0 kubenswrapper[7146]: I0318 13:09:33.939311 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7499" event={"ID":"b282ab6f-702c-44cc-942e-f2320b61d42e","Type":"ContainerDied","Data":"9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57"}
Mar 18 13:09:33.941689 master-0 kubenswrapper[7146]: I0318 13:09:33.941660 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"7adc626761eb731fa1dafdc2dfd398ac674d8912cfc7255a4a18d24c7a0eaf32"}
Mar 18 13:09:34.336718 master-0 kubenswrapper[7146]: W0318 13:09:34.336545 7146 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a.scope: no such file or directory
Mar 18 13:09:34.336718 master-0 kubenswrapper[7146]: W0318 13:09:34.336600 7146 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a.scope: no such file or directory
Mar 18 13:09:34.467050 master-0 kubenswrapper[7146]: I0318 13:09:34.467017 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:09:34.611589 master-0 kubenswrapper[7146]: I0318 13:09:34.611555 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_d6fab6cf-3b8f-47a6-837a-319049f487e3/installer/0.log"
Mar 18 13:09:34.611713 master-0 kubenswrapper[7146]: I0318 13:09:34.611621 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:09:34.776881 master-0 kubenswrapper[7146]: I0318 13:09:34.776819 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-kubelet-dir\") pod \"d6fab6cf-3b8f-47a6-837a-319049f487e3\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") "
Mar 18 13:09:34.777109 master-0 kubenswrapper[7146]: I0318 13:09:34.776911 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-var-lock\") pod \"d6fab6cf-3b8f-47a6-837a-319049f487e3\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") "
Mar 18 13:09:34.777109 master-0 kubenswrapper[7146]: I0318 13:09:34.776973 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fab6cf-3b8f-47a6-837a-319049f487e3-kube-api-access\") pod \"d6fab6cf-3b8f-47a6-837a-319049f487e3\" (UID: \"d6fab6cf-3b8f-47a6-837a-319049f487e3\") "
Mar 18 13:09:34.777174 master-0 kubenswrapper[7146]: I0318 13:09:34.777078 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6fab6cf-3b8f-47a6-837a-319049f487e3" (UID: "d6fab6cf-3b8f-47a6-837a-319049f487e3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:09:34.777174 master-0 kubenswrapper[7146]: I0318 13:09:34.777148 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-var-lock" (OuterVolumeSpecName: "var-lock") pod "d6fab6cf-3b8f-47a6-837a-319049f487e3" (UID: "d6fab6cf-3b8f-47a6-837a-319049f487e3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:09:34.782549 master-0 kubenswrapper[7146]: I0318 13:09:34.782485 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6fab6cf-3b8f-47a6-837a-319049f487e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6fab6cf-3b8f-47a6-837a-319049f487e3" (UID: "d6fab6cf-3b8f-47a6-837a-319049f487e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:09:34.878128 master-0 kubenswrapper[7146]: I0318 13:09:34.877895 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6fab6cf-3b8f-47a6-837a-319049f487e3-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:09:34.878128 master-0 kubenswrapper[7146]: I0318 13:09:34.877952 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:09:34.878128 master-0 kubenswrapper[7146]: I0318 13:09:34.877965 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6fab6cf-3b8f-47a6-837a-319049f487e3-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:09:34.949751 master-0 kubenswrapper[7146]: I0318 13:09:34.949694 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a" exitCode=0
Mar 18 13:09:34.950112 master-0 kubenswrapper[7146]: I0318 13:09:34.950061 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a"}
Mar 18 13:09:34.953063 master-0 kubenswrapper[7146]: I0318 13:09:34.953019 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerStarted","Data":"bf6103b476cbe5f000efeec38d0e1eab0cb03070f7c4c9474f643324ed27d01a"}
Mar 18 13:09:34.959768 master-0 kubenswrapper[7146]: I0318 13:09:34.959173 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerStarted","Data":"34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5"}
Mar 18 13:09:34.961893 master-0 kubenswrapper[7146]: I0318 13:09:34.961536 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7499" event={"ID":"b282ab6f-702c-44cc-942e-f2320b61d42e","Type":"ContainerStarted","Data":"53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb"}
Mar 18 13:09:34.964040 master-0 kubenswrapper[7146]: I0318 13:09:34.963064 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_d6fab6cf-3b8f-47a6-837a-319049f487e3/installer/0.log"
Mar 18 13:09:34.964040 master-0 kubenswrapper[7146]: I0318 13:09:34.963107 7146 generic.go:334] "Generic (PLEG): container finished" podID="d6fab6cf-3b8f-47a6-837a-319049f487e3" containerID="cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d" exitCode=1
Mar 18 13:09:34.964040 master-0 kubenswrapper[7146]: I0318 13:09:34.963159 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d6fab6cf-3b8f-47a6-837a-319049f487e3","Type":"ContainerDied","Data":"cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d"}
Mar 18 13:09:34.964040 master-0 kubenswrapper[7146]: I0318 13:09:34.963178 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d6fab6cf-3b8f-47a6-837a-319049f487e3","Type":"ContainerDied","Data":"a51d0013ac9432bcbe6b6dfe803b9f4d84a0c049f224c76e5a674061ebc1d68e"}
Mar 18 13:09:34.964040 master-0 kubenswrapper[7146]: I0318 13:09:34.963197 7146 scope.go:117] "RemoveContainer" containerID="cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d"
Mar 18 13:09:34.964040 master-0 kubenswrapper[7146]: I0318 13:09:34.963282 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:09:34.966980 master-0 kubenswrapper[7146]: I0318 13:09:34.966729 7146 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="25e53d87fc10cbd1352f788562bb532f3ed8f0ccfa5cd8ec598184e45bd58b6c" exitCode=1
Mar 18 13:09:34.966980 master-0 kubenswrapper[7146]: I0318 13:09:34.966784 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"25e53d87fc10cbd1352f788562bb532f3ed8f0ccfa5cd8ec598184e45bd58b6c"}
Mar 18 13:09:34.967306 master-0 kubenswrapper[7146]: I0318 13:09:34.967282 7146 scope.go:117] "RemoveContainer" containerID="25e53d87fc10cbd1352f788562bb532f3ed8f0ccfa5cd8ec598184e45bd58b6c"
Mar 18 13:09:34.977368 master-0 kubenswrapper[7146]: I0318 13:09:34.977313 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_82d33ac9-1471-47c5-802c-c267e7c1694f/installer/0.log"
Mar 18 13:09:34.977455 master-0 kubenswrapper[7146]: I0318 13:09:34.977371 7146 generic.go:334] "Generic (PLEG): container finished" podID="82d33ac9-1471-47c5-802c-c267e7c1694f" containerID="d9d3a75725d56154d845d3eafe31cef00c186357af6963fb23afd016af24585b" exitCode=1
Mar 18 13:09:34.977455 master-0 kubenswrapper[7146]: I0318 13:09:34.977412 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"82d33ac9-1471-47c5-802c-c267e7c1694f","Type":"ContainerDied","Data":"d9d3a75725d56154d845d3eafe31cef00c186357af6963fb23afd016af24585b"}
Mar 18 13:09:34.981888 master-0 kubenswrapper[7146]: I0318 13:09:34.981826 7146 scope.go:117] "RemoveContainer" containerID="cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d"
Mar 18 13:09:34.982629 master-0 kubenswrapper[7146]: E0318 13:09:34.982434 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d\": container with ID starting with cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d not found: ID does not exist" containerID="cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d"
Mar 18 13:09:34.982629 master-0 kubenswrapper[7146]: I0318 13:09:34.982477 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d"} err="failed to get container status \"cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d\": rpc error: code = NotFound desc = could not find container \"cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d\": container with ID starting with cd5b538d98c70389dadc08a33490f9393909a4971cfc1d30db05717c62e0141d not found: ID does not exist"
Mar 18 13:09:34.982629 master-0 kubenswrapper[7146]: I0318 13:09:34.982503 7146 scope.go:117] "RemoveContainer" containerID="eec5f6ca3a758062e499f6115be65dea726d3162ea11a793f6a93a0de501edcb"
Mar 18 13:09:35.120314 master-0 kubenswrapper[7146]: I0318 13:09:35.120222 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_82d33ac9-1471-47c5-802c-c267e7c1694f/installer/0.log"
Mar 18 13:09:35.120314 master-0 kubenswrapper[7146]: I0318 13:09:35.120292 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 13:09:35.188328 master-0 kubenswrapper[7146]: E0318 13:09:35.188275 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:09:35.267036 master-0 kubenswrapper[7146]: I0318 13:09:35.266975 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7gwnt"
Mar 18 13:09:35.267036 master-0 kubenswrapper[7146]: I0318 13:09:35.267037 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7gwnt"
Mar 18 13:09:35.282000 master-0 kubenswrapper[7146]: I0318 13:09:35.281919 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-kubelet-dir\") pod \"82d33ac9-1471-47c5-802c-c267e7c1694f\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") "
Mar 18 13:09:35.282000 master-0 kubenswrapper[7146]: I0318 13:09:35.282001 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-var-lock\") pod \"82d33ac9-1471-47c5-802c-c267e7c1694f\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") "
Mar 18 13:09:35.282232 master-0 kubenswrapper[7146]: I0318 13:09:35.282056 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82d33ac9-1471-47c5-802c-c267e7c1694f-kube-api-access\") pod \"82d33ac9-1471-47c5-802c-c267e7c1694f\" (UID: \"82d33ac9-1471-47c5-802c-c267e7c1694f\") "
Mar 18 13:09:35.282662 master-0
kubenswrapper[7146]: I0318 13:09:35.282597 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "82d33ac9-1471-47c5-802c-c267e7c1694f" (UID: "82d33ac9-1471-47c5-802c-c267e7c1694f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:35.282735 master-0 kubenswrapper[7146]: I0318 13:09:35.282690 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-var-lock" (OuterVolumeSpecName: "var-lock") pod "82d33ac9-1471-47c5-802c-c267e7c1694f" (UID: "82d33ac9-1471-47c5-802c-c267e7c1694f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:35.284818 master-0 kubenswrapper[7146]: I0318 13:09:35.284767 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d33ac9-1471-47c5-802c-c267e7c1694f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "82d33ac9-1471-47c5-802c-c267e7c1694f" (UID: "82d33ac9-1471-47c5-802c-c267e7c1694f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:09:35.385691 master-0 kubenswrapper[7146]: I0318 13:09:35.385459 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:35.385691 master-0 kubenswrapper[7146]: I0318 13:09:35.385551 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82d33ac9-1471-47c5-802c-c267e7c1694f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:35.385691 master-0 kubenswrapper[7146]: I0318 13:09:35.385563 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82d33ac9-1471-47c5-802c-c267e7c1694f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:35.983952 master-0 kubenswrapper[7146]: I0318 13:09:35.983884 7146 generic.go:334] "Generic (PLEG): container finished" podID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerID="94d2bc335ae0ececbd31f7ab13a8fd2ea166534945dafb090b610544f37ca4e7" exitCode=0 Mar 18 13:09:35.984618 master-0 kubenswrapper[7146]: I0318 13:09:35.983966 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"f32b4d4d-df54-4fa7-a940-297e064fea44","Type":"ContainerDied","Data":"94d2bc335ae0ececbd31f7ab13a8fd2ea166534945dafb090b610544f37ca4e7"} Mar 18 13:09:35.988489 master-0 kubenswrapper[7146]: I0318 13:09:35.988391 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"} Mar 18 13:09:35.989890 master-0 kubenswrapper[7146]: I0318 13:09:35.989846 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_82d33ac9-1471-47c5-802c-c267e7c1694f/installer/0.log" Mar 18 13:09:35.989996 master-0 kubenswrapper[7146]: I0318 13:09:35.989978 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:09:35.990641 master-0 kubenswrapper[7146]: I0318 13:09:35.990605 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"82d33ac9-1471-47c5-802c-c267e7c1694f","Type":"ContainerDied","Data":"481c7dcce61a798c3ae7174386db78f936ce6c972f60de6b89507279a1155768"} Mar 18 13:09:35.990813 master-0 kubenswrapper[7146]: I0318 13:09:35.990788 7146 scope.go:117] "RemoveContainer" containerID="d9d3a75725d56154d845d3eafe31cef00c186357af6963fb23afd016af24585b" Mar 18 13:09:35.995485 master-0 kubenswrapper[7146]: I0318 13:09:35.995456 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerStarted","Data":"0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2"} Mar 18 13:09:36.302038 master-0 kubenswrapper[7146]: I0318 13:09:36.301872 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7gwnt" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="registry-server" probeResult="failure" output=< Mar 18 13:09:36.302038 master-0 kubenswrapper[7146]: timeout: failed to connect service ":50051" within 1s Mar 18 13:09:36.302038 master-0 kubenswrapper[7146]: > Mar 18 13:09:36.339419 master-0 kubenswrapper[7146]: I0318 13:09:36.339355 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:36.339622 master-0 kubenswrapper[7146]: I0318 13:09:36.339438 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:37.002499 master-0 kubenswrapper[7146]: I0318 13:09:37.002454 7146 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="40d2f52b6191fb64bc515d1f7e32cd3a0019730cc68c0ff9674d239a2fee21db" exitCode=1 Mar 18 13:09:37.003235 master-0 kubenswrapper[7146]: I0318 13:09:37.002597 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"40d2f52b6191fb64bc515d1f7e32cd3a0019730cc68c0ff9674d239a2fee21db"} Mar 18 13:09:37.003235 master-0 kubenswrapper[7146]: I0318 13:09:37.003030 7146 scope.go:117] "RemoveContainer" containerID="40d2f52b6191fb64bc515d1f7e32cd3a0019730cc68c0ff9674d239a2fee21db" Mar 18 13:09:37.274922 master-0 kubenswrapper[7146]: I0318 13:09:37.274844 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:09:37.308708 master-0 kubenswrapper[7146]: I0318 13:09:37.308640 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-kubelet-dir\") pod \"f32b4d4d-df54-4fa7-a940-297e064fea44\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " Mar 18 13:09:37.308708 master-0 kubenswrapper[7146]: I0318 13:09:37.308699 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-var-lock\") pod \"f32b4d4d-df54-4fa7-a940-297e064fea44\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " Mar 18 13:09:37.308990 master-0 kubenswrapper[7146]: I0318 13:09:37.308746 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f32b4d4d-df54-4fa7-a940-297e064fea44-kube-api-access\") pod \"f32b4d4d-df54-4fa7-a940-297e064fea44\" (UID: \"f32b4d4d-df54-4fa7-a940-297e064fea44\") " Mar 18 13:09:37.308990 master-0 kubenswrapper[7146]: I0318 13:09:37.308779 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f32b4d4d-df54-4fa7-a940-297e064fea44" (UID: "f32b4d4d-df54-4fa7-a940-297e064fea44"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:37.308990 master-0 kubenswrapper[7146]: I0318 13:09:37.308838 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-var-lock" (OuterVolumeSpecName: "var-lock") pod "f32b4d4d-df54-4fa7-a940-297e064fea44" (UID: "f32b4d4d-df54-4fa7-a940-297e064fea44"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:37.309072 master-0 kubenswrapper[7146]: I0318 13:09:37.308987 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:37.309072 master-0 kubenswrapper[7146]: I0318 13:09:37.309008 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f32b4d4d-df54-4fa7-a940-297e064fea44-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:37.311735 master-0 kubenswrapper[7146]: I0318 13:09:37.311674 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32b4d4d-df54-4fa7-a940-297e064fea44-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f32b4d4d-df54-4fa7-a940-297e064fea44" (UID: "f32b4d4d-df54-4fa7-a940-297e064fea44"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:09:37.380037 master-0 kubenswrapper[7146]: I0318 13:09:37.379928 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s6vkz" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="registry-server" probeResult="failure" output=< Mar 18 13:09:37.380037 master-0 kubenswrapper[7146]: timeout: failed to connect service ":50051" within 1s Mar 18 13:09:37.380037 master-0 kubenswrapper[7146]: > Mar 18 13:09:37.410265 master-0 kubenswrapper[7146]: I0318 13:09:37.410209 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f32b4d4d-df54-4fa7-a940-297e064fea44-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:37.561515 master-0 kubenswrapper[7146]: I0318 13:09:37.561378 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:37.561858 master-0 kubenswrapper[7146]: I0318 13:09:37.561815 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:37.619308 master-0 kubenswrapper[7146]: I0318 13:09:37.619255 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:38.011094 master-0 kubenswrapper[7146]: I0318 13:09:38.011032 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:09:38.011094 master-0 kubenswrapper[7146]: I0318 13:09:38.011041 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"f32b4d4d-df54-4fa7-a940-297e064fea44","Type":"ContainerDied","Data":"1b06475f72c4aa178a3711e3bf8a803b73ed7bca27bffed7ac62aefe98506c3d"} Mar 18 13:09:38.011094 master-0 kubenswrapper[7146]: I0318 13:09:38.011096 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b06475f72c4aa178a3711e3bf8a803b73ed7bca27bffed7ac62aefe98506c3d" Mar 18 13:09:38.013861 master-0 kubenswrapper[7146]: I0318 13:09:38.013810 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"fa1d385ac095a8d1dc31f1e6dbbfd78274773bc8abd30fc3ee99e963ef88d538"} Mar 18 13:09:39.059472 master-0 kubenswrapper[7146]: I0318 13:09:39.059407 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:09:40.055495 master-0 kubenswrapper[7146]: I0318 13:09:40.055256 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:09:42.661716 master-0 kubenswrapper[7146]: I0318 13:09:42.661621 7146 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-mqh5c container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Mar 18 13:09:42.661716 master-0 kubenswrapper[7146]: I0318 13:09:42.661715 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" podUID="8ce8e99d-7b02-4bf4-a438-adde851918cb" 
containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Mar 18 13:09:44.245354 master-0 kubenswrapper[7146]: E0318 13:09:44.245181 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:34Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc434
06cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"si
zeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:44.443465 master-0 kubenswrapper[7146]: I0318 13:09:44.443392 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 
13:09:44.870183 master-0 kubenswrapper[7146]: I0318 13:09:44.870102 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:44.870183 master-0 kubenswrapper[7146]: I0318 13:09:44.870199 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:44.909715 master-0 kubenswrapper[7146]: I0318 13:09:44.909666 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:45.113445 master-0 kubenswrapper[7146]: I0318 13:09:45.113382 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:09:45.188924 master-0 kubenswrapper[7146]: E0318 13:09:45.188853 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:45.319845 master-0 kubenswrapper[7146]: I0318 13:09:45.319800 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:45.368815 master-0 kubenswrapper[7146]: I0318 13:09:45.368775 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:09:46.371483 master-0 kubenswrapper[7146]: I0318 13:09:46.371432 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:46.407735 master-0 kubenswrapper[7146]: I0318 13:09:46.407707 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:09:47.444680 master-0 kubenswrapper[7146]: 
I0318 13:09:47.444524 7146 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:47.957664 master-0 kubenswrapper[7146]: E0318 13:09:47.956837 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:09:48.067162 master-0 kubenswrapper[7146]: I0318 13:09:48.067095 7146 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e" exitCode=0 Mar 18 13:09:49.078890 master-0 kubenswrapper[7146]: I0318 13:09:49.078707 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d" exitCode=0 Mar 18 13:09:49.078890 master-0 kubenswrapper[7146]: I0318 13:09:49.078765 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d"} Mar 18 13:09:49.081535 master-0 kubenswrapper[7146]: I0318 13:09:49.081453 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88cd8323-8857-41fe-85d4-e6064330ec71/installer/0.log" Mar 18 13:09:49.081649 master-0 kubenswrapper[7146]: I0318 13:09:49.081543 7146 generic.go:334] "Generic (PLEG): container finished" podID="88cd8323-8857-41fe-85d4-e6064330ec71" 
containerID="2930eafa2605e45a0822de041f245bf9aca0638ca211202bfcc70902ad20170b" exitCode=1 Mar 18 13:09:49.081649 master-0 kubenswrapper[7146]: I0318 13:09:49.081581 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88cd8323-8857-41fe-85d4-e6064330ec71","Type":"ContainerDied","Data":"2930eafa2605e45a0822de041f245bf9aca0638ca211202bfcc70902ad20170b"} Mar 18 13:09:50.343499 master-0 kubenswrapper[7146]: I0318 13:09:50.343447 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88cd8323-8857-41fe-85d4-e6064330ec71/installer/0.log" Mar 18 13:09:50.344265 master-0 kubenswrapper[7146]: I0318 13:09:50.343526 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:09:50.379612 master-0 kubenswrapper[7146]: I0318 13:09:50.379554 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-kubelet-dir\") pod \"88cd8323-8857-41fe-85d4-e6064330ec71\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " Mar 18 13:09:50.379821 master-0 kubenswrapper[7146]: I0318 13:09:50.379630 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "88cd8323-8857-41fe-85d4-e6064330ec71" (UID: "88cd8323-8857-41fe-85d4-e6064330ec71"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:50.379821 master-0 kubenswrapper[7146]: I0318 13:09:50.379661 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88cd8323-8857-41fe-85d4-e6064330ec71-kube-api-access\") pod \"88cd8323-8857-41fe-85d4-e6064330ec71\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " Mar 18 13:09:50.379821 master-0 kubenswrapper[7146]: I0318 13:09:50.379736 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-var-lock\") pod \"88cd8323-8857-41fe-85d4-e6064330ec71\" (UID: \"88cd8323-8857-41fe-85d4-e6064330ec71\") " Mar 18 13:09:50.379821 master-0 kubenswrapper[7146]: I0318 13:09:50.379813 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-var-lock" (OuterVolumeSpecName: "var-lock") pod "88cd8323-8857-41fe-85d4-e6064330ec71" (UID: "88cd8323-8857-41fe-85d4-e6064330ec71"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:50.380348 master-0 kubenswrapper[7146]: I0318 13:09:50.380318 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:50.380348 master-0 kubenswrapper[7146]: I0318 13:09:50.380338 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88cd8323-8857-41fe-85d4-e6064330ec71-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:50.383075 master-0 kubenswrapper[7146]: I0318 13:09:50.383027 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cd8323-8857-41fe-85d4-e6064330ec71-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "88cd8323-8857-41fe-85d4-e6064330ec71" (UID: "88cd8323-8857-41fe-85d4-e6064330ec71"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:09:50.481678 master-0 kubenswrapper[7146]: I0318 13:09:50.481607 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88cd8323-8857-41fe-85d4-e6064330ec71-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:50.920609 master-0 kubenswrapper[7146]: I0318 13:09:50.920541 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 13:09:50.920777 master-0 kubenswrapper[7146]: I0318 13:09:50.920622 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:09:50.987483 master-0 kubenswrapper[7146]: I0318 13:09:50.987377 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 13:09:50.987483 master-0 kubenswrapper[7146]: I0318 13:09:50.987449 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 13:09:50.987793 master-0 kubenswrapper[7146]: I0318 13:09:50.987534 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:50.987793 master-0 kubenswrapper[7146]: I0318 13:09:50.987587 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:09:50.987793 master-0 kubenswrapper[7146]: I0318 13:09:50.987669 7146 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:50.987793 master-0 kubenswrapper[7146]: I0318 13:09:50.987681 7146 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:09:51.095292 master-0 kubenswrapper[7146]: I0318 13:09:51.095091 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88cd8323-8857-41fe-85d4-e6064330ec71/installer/0.log" Mar 18 13:09:51.095292 master-0 kubenswrapper[7146]: I0318 13:09:51.095226 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"88cd8323-8857-41fe-85d4-e6064330ec71","Type":"ContainerDied","Data":"0d931af2c5d54a586a9cb21f694a9dbf73198cb23716b2134948c1a2dbbd5bc6"} Mar 18 13:09:51.095292 master-0 kubenswrapper[7146]: I0318 13:09:51.095268 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:09:51.095292 master-0 kubenswrapper[7146]: I0318 13:09:51.095290 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d931af2c5d54a586a9cb21f694a9dbf73198cb23716b2134948c1a2dbbd5bc6" Mar 18 13:09:51.097502 master-0 kubenswrapper[7146]: I0318 13:09:51.097448 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 13:09:51.097696 master-0 kubenswrapper[7146]: I0318 13:09:51.097508 7146 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c" exitCode=137 Mar 18 13:09:51.097696 master-0 kubenswrapper[7146]: I0318 13:09:51.097560 7146 scope.go:117] "RemoveContainer" containerID="5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e" Mar 18 13:09:51.097983 master-0 kubenswrapper[7146]: I0318 13:09:51.097860 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:09:51.111899 master-0 kubenswrapper[7146]: I0318 13:09:51.110479 7146 scope.go:117] "RemoveContainer" containerID="8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c" Mar 18 13:09:51.131707 master-0 kubenswrapper[7146]: I0318 13:09:51.131614 7146 scope.go:117] "RemoveContainer" containerID="5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e" Mar 18 13:09:51.132090 master-0 kubenswrapper[7146]: E0318 13:09:51.131996 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e\": container with ID starting with 5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e not found: ID does not exist" containerID="5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e" Mar 18 13:09:51.132090 master-0 kubenswrapper[7146]: I0318 13:09:51.132025 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e"} err="failed to get container status \"5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e\": rpc error: code = NotFound desc = could not find container \"5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e\": container with ID starting with 5a3aca2c595b11887478c52a2dc852e00eca370600ab4c4c0f7434b0e3d7365e not found: ID does not exist" Mar 18 13:09:51.132090 master-0 kubenswrapper[7146]: I0318 13:09:51.132045 7146 scope.go:117] "RemoveContainer" containerID="8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c" Mar 18 13:09:51.132248 master-0 kubenswrapper[7146]: E0318 13:09:51.132230 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c\": 
container with ID starting with 8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c not found: ID does not exist" containerID="8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c" Mar 18 13:09:51.132313 master-0 kubenswrapper[7146]: I0318 13:09:51.132252 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c"} err="failed to get container status \"8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c\": rpc error: code = NotFound desc = could not find container \"8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c\": container with ID starting with 8c058013257c1127015215f369eac1266cd3afbabe83f1a844ab4b8fd221030c not found: ID does not exist" Mar 18 13:09:51.371999 master-0 kubenswrapper[7146]: I0318 13:09:51.370631 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 18 13:09:51.371999 master-0 kubenswrapper[7146]: I0318 13:09:51.371436 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 13:09:52.661616 master-0 kubenswrapper[7146]: I0318 13:09:52.661514 7146 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-mqh5c container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Mar 18 13:09:52.662204 master-0 kubenswrapper[7146]: I0318 13:09:52.661631 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" podUID="8ce8e99d-7b02-4bf4-a438-adde851918cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": 
dial tcp 10.128.0.8:8443: connect: connection refused" Mar 18 13:09:54.246707 master-0 kubenswrapper[7146]: E0318 13:09:54.246623 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 18 13:09:54.829417 master-0 kubenswrapper[7146]: E0318 13:09:54.829246 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df1801e4a6fc5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:09:20.820146117 +0000 UTC m=+69.628363478,LastTimestamp:2026-03-18 13:09:20.820146117 +0000 UTC m=+69.628363478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:09:55.135209 master-0 kubenswrapper[7146]: I0318 13:09:55.135009 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-mk4d5_8a0944d2-d99a-42eb-81f5-a212b750b8f4/network-operator/0.log" Mar 18 13:09:55.135209 master-0 kubenswrapper[7146]: I0318 13:09:55.135110 7146 generic.go:334] "Generic (PLEG): container finished" podID="8a0944d2-d99a-42eb-81f5-a212b750b8f4" containerID="6b882cdda72d564225a61ad06267c4be93a7acf1cff49af344ca080e3af8cb10" exitCode=255 Mar 18 13:09:55.189838 master-0 kubenswrapper[7146]: E0318 13:09:55.189730 7146 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:57.148289 master-0 kubenswrapper[7146]: I0318 13:09:57.148205 7146 generic.go:334] "Generic (PLEG): container finished" podID="1bf0ea4e-8b08-488f-b252-39580f46b756" containerID="fd0bf4a4bcfb53e14fbaa9e4b5ac94436e182002bb238e07513655ae02a57f1d" exitCode=0 Mar 18 13:09:57.444106 master-0 kubenswrapper[7146]: I0318 13:09:57.444005 7146 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:02.085745 master-0 kubenswrapper[7146]: E0318 13:10:02.085581 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:03.185995 master-0 kubenswrapper[7146]: I0318 13:10:03.185926 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c" exitCode=0 Mar 18 13:10:03.187539 master-0 kubenswrapper[7146]: I0318 13:10:03.187521 7146 generic.go:334] "Generic (PLEG): container finished" podID="c2c4572e-0b38-4db1-96e5-6a35e29048e7" containerID="d02c6c3cdba1a1883c0637cac9a306051c4ef216e0033461edc5cc690bbb087e" exitCode=0 Mar 18 13:10:04.194555 master-0 kubenswrapper[7146]: I0318 13:10:04.194511 7146 generic.go:334] "Generic (PLEG): container finished" podID="8ce8e99d-7b02-4bf4-a438-adde851918cb" containerID="f140128413a59472c05ccbf8a67ba06b17c2bdd86a6d5881d2c8c4864d65b7ae" exitCode=0 Mar 18 
13:10:04.248096 master-0 kubenswrapper[7146]: E0318 13:10:04.248037 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 18 13:10:05.190337 master-0 kubenswrapper[7146]: E0318 13:10:05.190282 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:07.444750 master-0 kubenswrapper[7146]: I0318 13:10:07.444629 7146 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:11.230101 master-0 kubenswrapper[7146]: I0318 13:10:11.230002 7146 generic.go:334] "Generic (PLEG): container finished" podID="83a4f641-d28f-42aa-a228-f6086d720fe4" containerID="f0be59386377b23fb8fc7601c10eb271b7e5a273e5f53453eae290b11eb4345f" exitCode=0 Mar 18 13:10:12.235156 master-0 kubenswrapper[7146]: I0318 13:10:12.235097 7146 generic.go:334] "Generic (PLEG): container finished" podID="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" containerID="73eeb12fc6c56e08bfbb513524488ba1e9f64fd246eaef82ed0bfd67ecb4ec86" exitCode=0 Mar 18 13:10:14.248886 master-0 kubenswrapper[7146]: E0318 13:10:14.248821 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 
13:10:15.191833 master-0 kubenswrapper[7146]: E0318 13:10:15.191731 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:15.191833 master-0 kubenswrapper[7146]: I0318 13:10:15.191791 7146 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 13:10:16.192313 master-0 kubenswrapper[7146]: E0318 13:10:16.192243 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:16.255828 master-0 kubenswrapper[7146]: I0318 13:10:16.255778 7146 generic.go:334] "Generic (PLEG): container finished" podID="93ea3c78-dede-468f-89a5-551133f794c5" containerID="ef423dc670cb4c823cf16513eca393eb2237d93c1c3d72d4a3125b276f8fdce7" exitCode=0 Mar 18 13:10:17.270524 master-0 kubenswrapper[7146]: I0318 13:10:17.270358 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/0.log" Mar 18 13:10:17.287328 master-0 kubenswrapper[7146]: I0318 13:10:17.270862 7146 generic.go:334] "Generic (PLEG): container finished" podID="eb8907fd-35dd-452a-8032-f2f95a6e553a" containerID="42763f2e1945cdd442dd148f3b0766793cb775dcfcb2d6ede73f97fce1315683" exitCode=1 Mar 18 13:10:24.249512 master-0 kubenswrapper[7146]: E0318 13:10:24.249460 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:24.249512 master-0 kubenswrapper[7146]: E0318 
13:10:24.249494 7146 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:10:25.192688 master-0 kubenswrapper[7146]: E0318 13:10:25.192553 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="200ms" Mar 18 13:10:25.373818 master-0 kubenswrapper[7146]: E0318 13:10:25.373749 7146 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:10:25.374598 master-0 kubenswrapper[7146]: E0318 13:10:25.373928 7146 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Mar 18 13:10:25.374598 master-0 kubenswrapper[7146]: I0318 13:10:25.373962 7146 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:10:25.374793 master-0 kubenswrapper[7146]: I0318 13:10:25.374739 7146 scope.go:117] "RemoveContainer" containerID="f140128413a59472c05ccbf8a67ba06b17c2bdd86a6d5881d2c8c4864d65b7ae" Mar 18 13:10:25.383883 master-0 kubenswrapper[7146]: I0318 13:10:25.383834 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 13:10:27.985991 master-0 kubenswrapper[7146]: E0318 13:10:27.984394 7146 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.61s" Mar 18 13:10:27.999338 master-0 kubenswrapper[7146]: I0318 13:10:27.999299 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 13:10:28.004655 master-0 kubenswrapper[7146]: I0318 13:10:28.004600 7146 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" event={"ID":"8a0944d2-d99a-42eb-81f5-a212b750b8f4","Type":"ContainerDied","Data":"6b882cdda72d564225a61ad06267c4be93a7acf1cff49af344ca080e3af8cb10"} Mar 18 13:10:28.004655 master-0 kubenswrapper[7146]: I0318 13:10:28.004651 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004665 7146 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="e2ac4e86-67bb-4698-bd14-eac99281ebf4" Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004730 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" event={"ID":"1bf0ea4e-8b08-488f-b252-39580f46b756","Type":"ContainerDied","Data":"fd0bf4a4bcfb53e14fbaa9e4b5ac94436e182002bb238e07513655ae02a57f1d"} Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004745 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004757 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c"} Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004769 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" event={"ID":"c2c4572e-0b38-4db1-96e5-6a35e29048e7","Type":"ContainerDied","Data":"d02c6c3cdba1a1883c0637cac9a306051c4ef216e0033461edc5cc690bbb087e"} Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004781 7146 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"] Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004791 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" event={"ID":"8ce8e99d-7b02-4bf4-a438-adde851918cb","Type":"ContainerDied","Data":"f140128413a59472c05ccbf8a67ba06b17c2bdd86a6d5881d2c8c4864d65b7ae"} Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004802 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 13:10:28.004803 master-0 kubenswrapper[7146]: I0318 13:10:28.004811 7146 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="e2ac4e86-67bb-4698-bd14-eac99281ebf4" Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.004820 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"] Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005045 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005057 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"] Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005067 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" event={"ID":"83a4f641-d28f-42aa-a228-f6086d720fe4","Type":"ContainerDied","Data":"f0be59386377b23fb8fc7601c10eb271b7e5a273e5f53453eae290b11eb4345f"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005083 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" event={"ID":"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41","Type":"ContainerDied","Data":"73eeb12fc6c56e08bfbb513524488ba1e9f64fd246eaef82ed0bfd67ecb4ec86"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005096 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" event={"ID":"93ea3c78-dede-468f-89a5-551133f794c5","Type":"ContainerDied","Data":"ef423dc670cb4c823cf16513eca393eb2237d93c1c3d72d4a3125b276f8fdce7"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005109 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005119 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005126 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005134 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005142 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerDied","Data":"42763f2e1945cdd442dd148f3b0766793cb775dcfcb2d6ede73f97fce1315683"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005152 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f"} Mar 18 13:10:28.005172 master-0 kubenswrapper[7146]: I0318 13:10:28.005160 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" event={"ID":"8ce8e99d-7b02-4bf4-a438-adde851918cb","Type":"ContainerStarted","Data":"fed27cffabccc869ac986a2ece7d1eaa066ccade65777fdb3d022f0aa2c6568c"} Mar 18 13:10:28.005836 master-0 kubenswrapper[7146]: I0318 13:10:28.005813 7146 scope.go:117] "RemoveContainer" containerID="42763f2e1945cdd442dd148f3b0766793cb775dcfcb2d6ede73f97fce1315683" Mar 18 13:10:28.006916 master-0 kubenswrapper[7146]: I0318 13:10:28.006885 7146 scope.go:117] "RemoveContainer" containerID="6b882cdda72d564225a61ad06267c4be93a7acf1cff49af344ca080e3af8cb10" Mar 18 13:10:28.012808 master-0 kubenswrapper[7146]: I0318 13:10:28.011904 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 13:10:28.012808 master-0 kubenswrapper[7146]: I0318 13:10:28.012036 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" 
containerName="kube-controller-manager" containerID="cri-o://8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38" gracePeriod=30 Mar 18 13:10:28.013113 master-0 kubenswrapper[7146]: I0318 13:10:28.013001 7146 scope.go:117] "RemoveContainer" containerID="73eeb12fc6c56e08bfbb513524488ba1e9f64fd246eaef82ed0bfd67ecb4ec86" Mar 18 13:10:28.013767 master-0 kubenswrapper[7146]: I0318 13:10:28.013458 7146 scope.go:117] "RemoveContainer" containerID="d02c6c3cdba1a1883c0637cac9a306051c4ef216e0033461edc5cc690bbb087e" Mar 18 13:10:28.013767 master-0 kubenswrapper[7146]: I0318 13:10:28.013601 7146 scope.go:117] "RemoveContainer" containerID="ef423dc670cb4c823cf16513eca393eb2237d93c1c3d72d4a3125b276f8fdce7" Mar 18 13:10:28.014091 master-0 kubenswrapper[7146]: I0318 13:10:28.013766 7146 scope.go:117] "RemoveContainer" containerID="f0be59386377b23fb8fc7601c10eb271b7e5a273e5f53453eae290b11eb4345f" Mar 18 13:10:28.016087 master-0 kubenswrapper[7146]: I0318 13:10:28.016026 7146 scope.go:117] "RemoveContainer" containerID="fd0bf4a4bcfb53e14fbaa9e4b5ac94436e182002bb238e07513655ae02a57f1d" Mar 18 13:10:28.053237 master-0 kubenswrapper[7146]: W0318 13:10:28.043946 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65cfa12a_0711_4fba_8859_73a3f8f250a9.slice/crio-880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c WatchSource:0}: Error finding container 880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c: Status 404 returned error can't find the container with id 880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c Mar 18 13:10:28.068174 master-0 kubenswrapper[7146]: I0318 13:10:28.066016 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7499" podStartSLOduration=55.341053865 podStartE2EDuration="1m24.065995246s" podCreationTimestamp="2026-03-18 13:09:04 +0000 UTC" 
firstStartedPulling="2026-03-18 13:09:05.715721702 +0000 UTC m=+54.523939073" lastFinishedPulling="2026-03-18 13:09:34.440663093 +0000 UTC m=+83.248880454" observedRunningTime="2026-03-18 13:10:28.063917047 +0000 UTC m=+136.872134408" watchObservedRunningTime="2026-03-18 13:10:28.065995246 +0000 UTC m=+136.874212617" Mar 18 13:10:28.113028 master-0 kubenswrapper[7146]: I0318 13:10:28.112952 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-srjhk" podStartSLOduration=58.109494625 podStartE2EDuration="1m21.112918895s" podCreationTimestamp="2026-03-18 13:09:07 +0000 UTC" firstStartedPulling="2026-03-18 13:09:11.783717677 +0000 UTC m=+60.591935038" lastFinishedPulling="2026-03-18 13:09:34.787141947 +0000 UTC m=+83.595359308" observedRunningTime="2026-03-18 13:10:28.091511399 +0000 UTC m=+136.899728790" watchObservedRunningTime="2026-03-18 13:10:28.112918895 +0000 UTC m=+136.921136256" Mar 18 13:10:28.174856 master-0 kubenswrapper[7146]: I0318 13:10:28.174488 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7gwnt" podStartSLOduration=56.400270885 podStartE2EDuration="1m24.174465949s" podCreationTimestamp="2026-03-18 13:09:04 +0000 UTC" firstStartedPulling="2026-03-18 13:09:06.724089783 +0000 UTC m=+55.532307144" lastFinishedPulling="2026-03-18 13:09:34.498284847 +0000 UTC m=+83.306502208" observedRunningTime="2026-03-18 13:10:28.170850927 +0000 UTC m=+136.979068288" watchObservedRunningTime="2026-03-18 13:10:28.174465949 +0000 UTC m=+136.982683310" Mar 18 13:10:28.268307 master-0 kubenswrapper[7146]: I0318 13:10:28.267542 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:10:28.271248 master-0 kubenswrapper[7146]: I0318 13:10:28.271198 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 
13:10:28.277052 master-0 kubenswrapper[7146]: I0318 13:10:28.276985 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s6vkz" podStartSLOduration=56.568904459 podStartE2EDuration="1m23.276967534s" podCreationTimestamp="2026-03-18 13:09:05 +0000 UTC" firstStartedPulling="2026-03-18 13:09:07.733665968 +0000 UTC m=+56.541883329" lastFinishedPulling="2026-03-18 13:09:34.441729043 +0000 UTC m=+83.249946404" observedRunningTime="2026-03-18 13:10:28.274847014 +0000 UTC m=+137.083064385" watchObservedRunningTime="2026-03-18 13:10:28.276967534 +0000 UTC m=+137.085184915" Mar 18 13:10:28.330731 master-0 kubenswrapper[7146]: I0318 13:10:28.330686 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 13:10:28.342131 master-0 kubenswrapper[7146]: I0318 13:10:28.342086 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 13:10:28.349634 master-0 kubenswrapper[7146]: I0318 13:10:28.349478 7146 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38" exitCode=2 Mar 18 13:10:28.349634 master-0 kubenswrapper[7146]: I0318 13:10:28.349548 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"} Mar 18 13:10:28.349634 master-0 kubenswrapper[7146]: I0318 13:10:28.349588 7146 scope.go:117] "RemoveContainer" containerID="25e53d87fc10cbd1352f788562bb532f3ed8f0ccfa5cd8ec598184e45bd58b6c" Mar 18 13:10:28.352296 master-0 kubenswrapper[7146]: I0318 13:10:28.352236 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" 
event={"ID":"83a4f641-d28f-42aa-a228-f6086d720fe4","Type":"ContainerStarted","Data":"301ee01a7a66e1cd68183a5b8216addd536e84e88bdf6811d11781e9862352fa"} Mar 18 13:10:28.354641 master-0 kubenswrapper[7146]: I0318 13:10:28.354608 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" event={"ID":"65cfa12a-0711-4fba-8859-73a3f8f250a9","Type":"ContainerStarted","Data":"880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c"} Mar 18 13:10:28.356434 master-0 kubenswrapper[7146]: I0318 13:10:28.356384 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/0.log" Mar 18 13:10:28.356841 master-0 kubenswrapper[7146]: I0318 13:10:28.356816 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerStarted","Data":"0e76cffa571436858041a59dc3cb08e8f19f5b925a773925f3208413a9e44b8f"} Mar 18 13:10:28.359993 master-0 kubenswrapper[7146]: I0318 13:10:28.359894 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" event={"ID":"a5a93d05-3c8e-4666-9a55-d8f9e902db78","Type":"ContainerStarted","Data":"2fb5e5e8607f93dafe9cc4e7936985507a00d052cc2ac3e0c096e4455936f109"} Mar 18 13:10:28.362200 master-0 kubenswrapper[7146]: I0318 13:10:28.362155 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"f4d88fc1-4e92-432e-ac2c-e1c489b15e93","Type":"ContainerStarted","Data":"4c416409750419b3738641dbf762d8e4ba531250589956be62e2ee0593e39b8a"} Mar 18 13:10:28.363651 master-0 kubenswrapper[7146]: I0318 13:10:28.363602 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" event={"ID":"933a37fd-d76a-4f60-8dd8-301fb73daf42","Type":"ContainerStarted","Data":"0dfd132ca6d17d71f64272cbf05802b2cf41d07648dbd09346eab0774ba709b2"} Mar 18 13:10:28.408445 master-0 kubenswrapper[7146]: I0318 13:10:28.406976 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_245f3af1-ccfb-4191-9a34-00852e52a73d/installer/0.log" Mar 18 13:10:28.408445 master-0 kubenswrapper[7146]: I0318 13:10:28.407030 7146 generic.go:334] "Generic (PLEG): container finished" podID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerID="2590a481a145d76e2b7df7ede04cc027447c99a8ab51376b367af34e50c7be34" exitCode=1 Mar 18 13:10:28.408445 master-0 kubenswrapper[7146]: I0318 13:10:28.407089 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"245f3af1-ccfb-4191-9a34-00852e52a73d","Type":"ContainerDied","Data":"2590a481a145d76e2b7df7ede04cc027447c99a8ab51376b367af34e50c7be34"} Mar 18 13:10:28.862536 master-0 kubenswrapper[7146]: I0318 13:10:28.862383 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:29.363900 master-0 kubenswrapper[7146]: I0318 13:10:29.363841 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d33ac9-1471-47c5-802c-c267e7c1694f" path="/var/lib/kubelet/pods/82d33ac9-1471-47c5-802c-c267e7c1694f/volumes" Mar 18 13:10:29.364443 master-0 kubenswrapper[7146]: I0318 13:10:29.364394 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6fab6cf-3b8f-47a6-837a-319049f487e3" path="/var/lib/kubelet/pods/d6fab6cf-3b8f-47a6-837a-319049f487e3/volumes" Mar 18 13:10:29.424178 master-0 kubenswrapper[7146]: I0318 13:10:29.424100 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" event={"ID":"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41","Type":"ContainerStarted","Data":"40c575e7626998887d2a15472abe8dad00760b420577e45054ed5616a705862d"} Mar 18 13:10:29.426144 master-0 kubenswrapper[7146]: I0318 13:10:29.426115 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" event={"ID":"1bf0ea4e-8b08-488f-b252-39580f46b756","Type":"ContainerStarted","Data":"cdeecfaffa91bced4d378bfbb335379410c275c90260acdb4404f15430b5fb3b"} Mar 18 13:10:29.429025 master-0 kubenswrapper[7146]: I0318 13:10:29.428985 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" event={"ID":"c2c4572e-0b38-4db1-96e5-6a35e29048e7","Type":"ContainerStarted","Data":"d96c2656b3be9b7ce731b99cc6e6159cd56ae4448050073934a71abeecbf6860"} Mar 18 13:10:29.431051 master-0 kubenswrapper[7146]: I0318 13:10:29.431011 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"} Mar 18 13:10:29.432787 master-0 kubenswrapper[7146]: I0318 13:10:29.432751 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" event={"ID":"65cfa12a-0711-4fba-8859-73a3f8f250a9","Type":"ContainerStarted","Data":"8a450d61a86ca02f43befd316491f266f23f5f89125343df32e08e9b38e85140"} Mar 18 13:10:29.437289 master-0 kubenswrapper[7146]: I0318 13:10:29.437256 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:10:29.440340 master-0 kubenswrapper[7146]: I0318 13:10:29.440308 7146 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" event={"ID":"a5a93d05-3c8e-4666-9a55-d8f9e902db78","Type":"ContainerStarted","Data":"b10031bd90b55a9a696a81d72f5edb8059040095aa52e3160902d05b4a7cd6cf"} Mar 18 13:10:29.442506 master-0 kubenswrapper[7146]: I0318 13:10:29.440609 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:10:29.442680 master-0 kubenswrapper[7146]: I0318 13:10:29.442656 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" event={"ID":"93ea3c78-dede-468f-89a5-551133f794c5","Type":"ContainerStarted","Data":"ecac9e54fbb68035c123d989f9fe8209b26c2fc9d7913909adf91d340c827099"} Mar 18 13:10:29.444764 master-0 kubenswrapper[7146]: I0318 13:10:29.444713 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"f4d88fc1-4e92-432e-ac2c-e1c489b15e93","Type":"ContainerStarted","Data":"3aecc1592a5c76f7851ff01bf9ec75d38c020718af10663c3a3924f329ae17c6"} Mar 18 13:10:29.446072 master-0 kubenswrapper[7146]: I0318 13:10:29.445907 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:10:29.450681 master-0 kubenswrapper[7146]: I0318 13:10:29.450631 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-mk4d5_8a0944d2-d99a-42eb-81f5-a212b750b8f4/network-operator/0.log" Mar 18 13:10:29.450877 master-0 kubenswrapper[7146]: I0318 13:10:29.450767 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" event={"ID":"8a0944d2-d99a-42eb-81f5-a212b750b8f4","Type":"ContainerStarted","Data":"0c0dff018b4cb570a73508a63c98cc78ae5d31ec05596f9eda1e17c194b6b492"} 
Mar 18 13:10:29.487458 master-0 kubenswrapper[7146]: I0318 13:10:29.487381 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" podStartSLOduration=82.487343921 podStartE2EDuration="1m22.487343921s" podCreationTimestamp="2026-03-18 13:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:10:29.486158847 +0000 UTC m=+138.294376218" watchObservedRunningTime="2026-03-18 13:10:29.487343921 +0000 UTC m=+138.295561272" Mar 18 13:10:29.529989 master-0 kubenswrapper[7146]: I0318 13:10:29.529806 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=73.529784483 podStartE2EDuration="1m13.529784483s" podCreationTimestamp="2026-03-18 13:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:10:29.52684764 +0000 UTC m=+138.335065001" watchObservedRunningTime="2026-03-18 13:10:29.529784483 +0000 UTC m=+138.338001844" Mar 18 13:10:29.649287 master-0 kubenswrapper[7146]: I0318 13:10:29.649163 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" podStartSLOduration=82.649144106 podStartE2EDuration="1m22.649144106s" podCreationTimestamp="2026-03-18 13:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:10:29.646807699 +0000 UTC m=+138.455025070" watchObservedRunningTime="2026-03-18 13:10:29.649144106 +0000 UTC m=+138.457361487" Mar 18 13:10:30.056407 master-0 kubenswrapper[7146]: I0318 13:10:30.056319 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:10:30.129434 master-0 kubenswrapper[7146]: I0318 13:10:30.129037 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_245f3af1-ccfb-4191-9a34-00852e52a73d/installer/0.log" Mar 18 13:10:30.129434 master-0 kubenswrapper[7146]: I0318 13:10:30.129103 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:10:30.224146 master-0 kubenswrapper[7146]: I0318 13:10:30.224113 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-var-lock\") pod \"245f3af1-ccfb-4191-9a34-00852e52a73d\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " Mar 18 13:10:30.224243 master-0 kubenswrapper[7146]: I0318 13:10:30.224168 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-kubelet-dir\") pod \"245f3af1-ccfb-4191-9a34-00852e52a73d\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " Mar 18 13:10:30.224243 master-0 kubenswrapper[7146]: I0318 13:10:30.224195 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245f3af1-ccfb-4191-9a34-00852e52a73d-kube-api-access\") pod \"245f3af1-ccfb-4191-9a34-00852e52a73d\" (UID: \"245f3af1-ccfb-4191-9a34-00852e52a73d\") " Mar 18 13:10:30.224332 master-0 kubenswrapper[7146]: I0318 13:10:30.224256 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-var-lock" (OuterVolumeSpecName: "var-lock") pod "245f3af1-ccfb-4191-9a34-00852e52a73d" (UID: "245f3af1-ccfb-4191-9a34-00852e52a73d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:10:30.224376 master-0 kubenswrapper[7146]: I0318 13:10:30.224267 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "245f3af1-ccfb-4191-9a34-00852e52a73d" (UID: "245f3af1-ccfb-4191-9a34-00852e52a73d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:10:30.224495 master-0 kubenswrapper[7146]: I0318 13:10:30.224467 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:10:30.224536 master-0 kubenswrapper[7146]: I0318 13:10:30.224492 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245f3af1-ccfb-4191-9a34-00852e52a73d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:10:30.229900 master-0 kubenswrapper[7146]: I0318 13:10:30.229836 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245f3af1-ccfb-4191-9a34-00852e52a73d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "245f3af1-ccfb-4191-9a34-00852e52a73d" (UID: "245f3af1-ccfb-4191-9a34-00852e52a73d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:10:30.331020 master-0 kubenswrapper[7146]: I0318 13:10:30.326900 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245f3af1-ccfb-4191-9a34-00852e52a73d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:10:30.433551 master-0 kubenswrapper[7146]: I0318 13:10:30.433484 7146 patch_prober.go:28] interesting pod/route-controller-manager-597f7b4fd-fgxxq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:10:30.434060 master-0 kubenswrapper[7146]: I0318 13:10:30.433575 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:30.456165 master-0 kubenswrapper[7146]: I0318 13:10:30.456102 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" event={"ID":"933a37fd-d76a-4f60-8dd8-301fb73daf42","Type":"ContainerStarted","Data":"2442652c47cb11893c3b83d3fad2866d5f95d1a4285de57aa76d8638f0a3ca4c"} Mar 18 13:10:30.457495 master-0 kubenswrapper[7146]: I0318 13:10:30.457444 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_245f3af1-ccfb-4191-9a34-00852e52a73d/installer/0.log" Mar 18 13:10:30.457628 master-0 kubenswrapper[7146]: I0318 13:10:30.457582 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"245f3af1-ccfb-4191-9a34-00852e52a73d","Type":"ContainerDied","Data":"4b5f4bb0323e76ef7cd02a1d41797e05db5442b3a066933557b53fceaffa8ab5"} Mar 18 13:10:30.457685 master-0 kubenswrapper[7146]: I0318 13:10:30.457631 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5f4bb0323e76ef7cd02a1d41797e05db5442b3a066933557b53fceaffa8ab5" Mar 18 13:10:30.457685 master-0 kubenswrapper[7146]: I0318 13:10:30.457673 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:10:30.472173 master-0 kubenswrapper[7146]: I0318 13:10:30.472112 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" podStartSLOduration=71.321201898 podStartE2EDuration="1m13.472090195s" podCreationTimestamp="2026-03-18 13:09:17 +0000 UTC" firstStartedPulling="2026-03-18 13:10:27.998457642 +0000 UTC m=+136.806675003" lastFinishedPulling="2026-03-18 13:10:30.149345939 +0000 UTC m=+138.957563300" observedRunningTime="2026-03-18 13:10:30.470307084 +0000 UTC m=+139.278524445" watchObservedRunningTime="2026-03-18 13:10:30.472090195 +0000 UTC m=+139.280307556" Mar 18 13:10:31.457956 master-0 kubenswrapper[7146]: I0318 13:10:31.457859 7146 patch_prober.go:28] interesting pod/route-controller-manager-597f7b4fd-fgxxq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:10:31.458636 master-0 kubenswrapper[7146]: I0318 13:10:31.458139 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" 
podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:32.463589 master-0 kubenswrapper[7146]: I0318 13:10:32.463490 7146 patch_prober.go:28] interesting pod/route-controller-manager-597f7b4fd-fgxxq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:10:32.463589 master-0 kubenswrapper[7146]: I0318 13:10:32.463560 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:33.140377 master-0 kubenswrapper[7146]: I0318 13:10:33.140277 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:10:33.536745 master-0 kubenswrapper[7146]: E0318 13:10:33.536668 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:33.801022 master-0 kubenswrapper[7146]: I0318 13:10:33.800894 7146 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-hmbpl container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 18 13:10:33.801022 master-0 kubenswrapper[7146]: I0318 13:10:33.800980 7146 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" podUID="1bf0ea4e-8b08-488f-b252-39580f46b756" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 18 13:10:33.862272 master-0 kubenswrapper[7146]: I0318 13:10:33.862186 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:33.882470 master-0 kubenswrapper[7146]: I0318 13:10:33.882418 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:33.957288 master-0 kubenswrapper[7146]: I0318 13:10:33.957220 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.957201116 podStartE2EDuration="957.201116ms" podCreationTimestamp="2026-03-18 13:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:10:33.954766197 +0000 UTC m=+142.762983558" watchObservedRunningTime="2026-03-18 13:10:33.957201116 +0000 UTC m=+142.765418477" Mar 18 13:10:34.443778 master-0 kubenswrapper[7146]: I0318 13:10:34.443692 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:10:34.449676 master-0 kubenswrapper[7146]: I0318 13:10:34.449612 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:10:34.493006 master-0 kubenswrapper[7146]: I0318 13:10:34.492887 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 13:10:35.484773 master-0 kubenswrapper[7146]: I0318 13:10:35.484728 7146 generic.go:334] "Generic (PLEG): container finished" 
podID="330df925-8429-4b96-9bfe-caa017c21afa" containerID="620704d7c61dd7667c0b9ebbc637d5a4615acb926bb8c0bad681bcafb14bec19" exitCode=0 Mar 18 13:10:35.485310 master-0 kubenswrapper[7146]: I0318 13:10:35.484834 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" event={"ID":"330df925-8429-4b96-9bfe-caa017c21afa","Type":"ContainerDied","Data":"620704d7c61dd7667c0b9ebbc637d5a4615acb926bb8c0bad681bcafb14bec19"} Mar 18 13:10:35.485618 master-0 kubenswrapper[7146]: I0318 13:10:35.485592 7146 scope.go:117] "RemoveContainer" containerID="620704d7c61dd7667c0b9ebbc637d5a4615acb926bb8c0bad681bcafb14bec19" Mar 18 13:10:36.493148 master-0 kubenswrapper[7146]: I0318 13:10:36.493057 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" event={"ID":"330df925-8429-4b96-9bfe-caa017c21afa","Type":"ContainerStarted","Data":"25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e"} Mar 18 13:10:36.494311 master-0 kubenswrapper[7146]: I0318 13:10:36.494229 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:10:36.497710 master-0 kubenswrapper[7146]: I0318 13:10:36.497657 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:10:36.743320 master-0 kubenswrapper[7146]: I0318 13:10:36.743183 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:10:40.060540 master-0 kubenswrapper[7146]: I0318 13:10:40.060500 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:10:48.737838 master-0 kubenswrapper[7146]: I0318 13:10:48.737801 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/0.log" Mar 18 13:10:48.738708 master-0 kubenswrapper[7146]: I0318 13:10:48.738674 7146 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="bd4c65659cdaf88672c351e368deda39b10476e44f4e0b79ea5e5dab975cb22c" exitCode=1 Mar 18 13:10:48.738815 master-0 kubenswrapper[7146]: I0318 13:10:48.738776 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerDied","Data":"bd4c65659cdaf88672c351e368deda39b10476e44f4e0b79ea5e5dab975cb22c"} Mar 18 13:10:48.739442 master-0 kubenswrapper[7146]: I0318 13:10:48.739422 7146 scope.go:117] "RemoveContainer" containerID="bd4c65659cdaf88672c351e368deda39b10476e44f4e0b79ea5e5dab975cb22c" Mar 18 13:10:49.745342 master-0 kubenswrapper[7146]: I0318 13:10:49.745297 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/0.log" Mar 18 13:10:49.745342 master-0 kubenswrapper[7146]: I0318 13:10:49.745346 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"737b35288b477956960fa12cc79eb83b193b7b471646ce5af1a3aaef15a0e026"} Mar 18 13:10:55.044667 master-0 kubenswrapper[7146]: I0318 13:10:55.044611 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw"] Mar 18 13:10:55.045711 master-0 kubenswrapper[7146]: E0318 13:10:55.045686 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6fab6cf-3b8f-47a6-837a-319049f487e3" containerName="installer" Mar 18 13:10:55.045819 
master-0 kubenswrapper[7146]: I0318 13:10:55.045804 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6fab6cf-3b8f-47a6-837a-319049f487e3" containerName="installer" Mar 18 13:10:55.045919 master-0 kubenswrapper[7146]: E0318 13:10:55.045904 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cd8323-8857-41fe-85d4-e6064330ec71" containerName="installer" Mar 18 13:10:55.046024 master-0 kubenswrapper[7146]: I0318 13:10:55.046009 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cd8323-8857-41fe-85d4-e6064330ec71" containerName="installer" Mar 18 13:10:55.046112 master-0 kubenswrapper[7146]: E0318 13:10:55.046097 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d33ac9-1471-47c5-802c-c267e7c1694f" containerName="installer" Mar 18 13:10:55.046197 master-0 kubenswrapper[7146]: I0318 13:10:55.046184 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d33ac9-1471-47c5-802c-c267e7c1694f" containerName="installer" Mar 18 13:10:55.046275 master-0 kubenswrapper[7146]: E0318 13:10:55.046262 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerName="installer" Mar 18 13:10:55.046370 master-0 kubenswrapper[7146]: I0318 13:10:55.046357 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerName="installer" Mar 18 13:10:55.046448 master-0 kubenswrapper[7146]: E0318 13:10:55.046435 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerName="installer" Mar 18 13:10:55.046527 master-0 kubenswrapper[7146]: I0318 13:10:55.046514 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerName="installer" Mar 18 13:10:55.046713 master-0 kubenswrapper[7146]: I0318 13:10:55.046697 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cd8323-8857-41fe-85d4-e6064330ec71" 
containerName="installer" Mar 18 13:10:55.046819 master-0 kubenswrapper[7146]: I0318 13:10:55.046805 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerName="installer" Mar 18 13:10:55.046893 master-0 kubenswrapper[7146]: I0318 13:10:55.046882 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6fab6cf-3b8f-47a6-837a-319049f487e3" containerName="installer" Mar 18 13:10:55.047808 master-0 kubenswrapper[7146]: I0318 13:10:55.047776 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerName="installer" Mar 18 13:10:55.047927 master-0 kubenswrapper[7146]: I0318 13:10:55.047914 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d33ac9-1471-47c5-802c-c267e7c1694f" containerName="installer" Mar 18 13:10:55.048752 master-0 kubenswrapper[7146]: I0318 13:10:55.048683 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w"] Mar 18 13:10:55.049007 master-0 kubenswrapper[7146]: I0318 13:10:55.048960 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:10:55.050276 master-0 kubenswrapper[7146]: I0318 13:10:55.050256 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.060015 master-0 kubenswrapper[7146]: I0318 13:10:55.056468 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc"] Mar 18 13:10:55.060015 master-0 kubenswrapper[7146]: I0318 13:10:55.057480 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:10:55.060015 master-0 kubenswrapper[7146]: I0318 13:10:55.058734 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-ntbvj" Mar 18 13:10:55.060247 master-0 kubenswrapper[7146]: I0318 13:10:55.060115 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-579bw" Mar 18 13:10:55.060437 master-0 kubenswrapper[7146]: I0318 13:10:55.060329 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 13:10:55.060437 master-0 kubenswrapper[7146]: I0318 13:10:55.060414 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:10:55.060680 master-0 kubenswrapper[7146]: I0318 13:10:55.060661 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:10:55.060721 master-0 kubenswrapper[7146]: I0318 13:10:55.060700 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:10:55.060909 master-0 kubenswrapper[7146]: I0318 13:10:55.060872 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"] Mar 18 13:10:55.061475 master-0 kubenswrapper[7146]: I0318 13:10:55.061453 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 13:10:55.061762 master-0 kubenswrapper[7146]: I0318 13:10:55.061741 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.067000 master-0 kubenswrapper[7146]: I0318 13:10:55.066966 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:10:55.067192 master-0 kubenswrapper[7146]: I0318 13:10:55.067154 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-6l8l5" Mar 18 13:10:55.067520 master-0 kubenswrapper[7146]: I0318 13:10:55.067480 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:10:55.080379 master-0 kubenswrapper[7146]: I0318 13:10:55.080327 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"] Mar 18 13:10:55.081515 master-0 kubenswrapper[7146]: I0318 13:10:55.081487 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.082558 master-0 kubenswrapper[7146]: I0318 13:10:55.082513 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:10:55.082805 master-0 kubenswrapper[7146]: I0318 13:10:55.082777 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-crbnv" Mar 18 13:10:55.082842 master-0 kubenswrapper[7146]: I0318 13:10:55.082819 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:10:55.082928 master-0 kubenswrapper[7146]: I0318 13:10:55.082901 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:10:55.083046 master-0 kubenswrapper[7146]: I0318 13:10:55.083019 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:10:55.083191 master-0 kubenswrapper[7146]: I0318 13:10:55.083161 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:10:55.083607 master-0 kubenswrapper[7146]: I0318 13:10:55.083552 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:10:55.088721 master-0 kubenswrapper[7146]: I0318 13:10:55.088671 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:10:55.089338 master-0 kubenswrapper[7146]: I0318 13:10:55.088907 7146 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:10:55.089338 master-0 kubenswrapper[7146]: I0318 13:10:55.089091 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 13:10:55.089338 master-0 kubenswrapper[7146]: I0318 13:10:55.089244 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:10:55.089338 master-0 kubenswrapper[7146]: I0318 13:10:55.089295 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:10:55.089459 master-0 kubenswrapper[7146]: I0318 13:10:55.089381 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lsr6r" Mar 18 13:10:55.108244 master-0 kubenswrapper[7146]: I0318 13:10:55.108185 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w"] Mar 18 13:10:55.112021 master-0 kubenswrapper[7146]: I0318 13:10:55.111980 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc"] Mar 18 13:10:55.112402 master-0 kubenswrapper[7146]: I0318 13:10:55.112368 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a951627-c032-4846-821c-c4bcbf4a91b9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:10:55.112478 master-0 kubenswrapper[7146]: I0318 
13:10:55.112405 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24935b14-2768-435e-8ed1-73ecac4e05d8-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.112478 master-0 kubenswrapper[7146]: I0318 13:10:55.112426 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48t2p\" (UniqueName: \"kubernetes.io/projected/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-kube-api-access-48t2p\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.112478 master-0 kubenswrapper[7146]: I0318 13:10:55.112448 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.112478 master-0 kubenswrapper[7146]: I0318 13:10:55.112466 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxn4v\" (UniqueName: \"kubernetes.io/projected/7a951627-c032-4846-821c-c4bcbf4a91b9-kube-api-access-wxn4v\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 
13:10:55.112484 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112504 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112520 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112534 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 
13:10:55.112550 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9jhr\" (UniqueName: \"kubernetes.io/projected/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-kube-api-access-w9jhr\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112570 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljgc\" (UniqueName: \"kubernetes.io/projected/24935b14-2768-435e-8ed1-73ecac4e05d8-kube-api-access-fljgc\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112591 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e309570-09d0-412a-a74b-c5397d048a30-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112611 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfq7\" (UniqueName: \"kubernetes.io/projected/7e309570-09d0-412a-a74b-c5397d048a30-kube-api-access-mcfq7\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112633 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-config\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112653 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.112737 master-0 kubenswrapper[7146]: I0318 13:10:55.112673 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.120186 master-0 kubenswrapper[7146]: I0318 13:10:55.120120 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw"] Mar 18 13:10:55.123528 master-0 kubenswrapper[7146]: I0318 13:10:55.123493 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"] Mar 18 13:10:55.124521 master-0 kubenswrapper[7146]: I0318 13:10:55.124499 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:10:55.126579 master-0 kubenswrapper[7146]: I0318 13:10:55.126318 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:10:55.126579 master-0 kubenswrapper[7146]: I0318 13:10:55.126450 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-8zrbw" Mar 18 13:10:55.133208 master-0 kubenswrapper[7146]: I0318 13:10:55.133136 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 13:10:55.135841 master-0 kubenswrapper[7146]: I0318 13:10:55.135813 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"] Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214051 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e309570-09d0-412a-a74b-c5397d048a30-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214115 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfq7\" (UniqueName: \"kubernetes.io/projected/7e309570-09d0-412a-a74b-c5397d048a30-kube-api-access-mcfq7\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214148 7146 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-config\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214177 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214209 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214245 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5475b\" (UniqueName: \"kubernetes.io/projected/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-kube-api-access-5475b\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214284 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a951627-c032-4846-821c-c4bcbf4a91b9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214314 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214339 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214367 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24935b14-2768-435e-8ed1-73ecac4e05d8-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214392 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48t2p\" (UniqueName: \"kubernetes.io/projected/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-kube-api-access-48t2p\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: 
\"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214414 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.214427 master-0 kubenswrapper[7146]: I0318 13:10:55.214437 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxn4v\" (UniqueName: \"kubernetes.io/projected/7a951627-c032-4846-821c-c4bcbf4a91b9-kube-api-access-wxn4v\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:10:55.214967 master-0 kubenswrapper[7146]: I0318 13:10:55.214462 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.214967 master-0 kubenswrapper[7146]: I0318 13:10:55.214491 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: 
\"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.214967 master-0 kubenswrapper[7146]: I0318 13:10:55.214513 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.214967 master-0 kubenswrapper[7146]: I0318 13:10:55.214538 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.214967 master-0 kubenswrapper[7146]: I0318 13:10:55.214564 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9jhr\" (UniqueName: \"kubernetes.io/projected/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-kube-api-access-w9jhr\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.214967 master-0 kubenswrapper[7146]: I0318 13:10:55.214591 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljgc\" (UniqueName: \"kubernetes.io/projected/24935b14-2768-435e-8ed1-73ecac4e05d8-kube-api-access-fljgc\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 
13:10:55.216217 master-0 kubenswrapper[7146]: I0318 13:10:55.216131 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.216852 master-0 kubenswrapper[7146]: I0318 13:10:55.216822 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.219154 master-0 kubenswrapper[7146]: I0318 13:10:55.217905 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.219154 master-0 kubenswrapper[7146]: I0318 13:10:55.218754 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.221683 master-0 kubenswrapper[7146]: I0318 13:10:55.221635 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e309570-09d0-412a-a74b-c5397d048a30-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:10:55.224263 master-0 kubenswrapper[7146]: I0318 13:10:55.224019 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a951627-c032-4846-821c-c4bcbf4a91b9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:10:55.224484 master-0 kubenswrapper[7146]: I0318 13:10:55.224441 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:10:55.226197 master-0 kubenswrapper[7146]: I0318 13:10:55.225682 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24935b14-2768-435e-8ed1-73ecac4e05d8-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.226197 master-0 kubenswrapper[7146]: I0318 13:10:55.226020 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-host-etc-kube\") pod 
\"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.226197 master-0 kubenswrapper[7146]: I0318 13:10:55.226157 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-config\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.231186 master-0 kubenswrapper[7146]: I0318 13:10:55.231140 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:10:55.241867 master-0 kubenswrapper[7146]: E0318 13:10:55.241821 7146 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a951627_c032_4846_821c_c4bcbf4a91b9.slice\": RecentStats: unable to find data in memory cache]" Mar 18 13:10:55.245563 master-0 kubenswrapper[7146]: I0318 13:10:55.245520 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljgc\" (UniqueName: \"kubernetes.io/projected/24935b14-2768-435e-8ed1-73ecac4e05d8-kube-api-access-fljgc\") pod \"machine-approver-6cb57bb5db-nkkt6\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" Mar 18 13:10:55.250764 master-0 
kubenswrapper[7146]: I0318 13:10:55.250718 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9jhr\" (UniqueName: \"kubernetes.io/projected/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-kube-api-access-w9jhr\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w"
Mar 18 13:10:55.253691 master-0 kubenswrapper[7146]: I0318 13:10:55.253648 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48t2p\" (UniqueName: \"kubernetes.io/projected/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-kube-api-access-48t2p\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-dnztj\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"
Mar 18 13:10:55.273217 master-0 kubenswrapper[7146]: I0318 13:10:55.268716 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfq7\" (UniqueName: \"kubernetes.io/projected/7e309570-09d0-412a-a74b-c5397d048a30-kube-api-access-mcfq7\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw"
Mar 18 13:10:55.310512 master-0 kubenswrapper[7146]: I0318 13:10:55.309612 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxn4v\" (UniqueName: \"kubernetes.io/projected/7a951627-c032-4846-821c-c4bcbf4a91b9-kube-api-access-wxn4v\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc"
Mar 18 13:10:55.316134 master-0 kubenswrapper[7146]: I0318 13:10:55.316065 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5475b\" (UniqueName: \"kubernetes.io/projected/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-kube-api-access-5475b\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.316321 master-0 kubenswrapper[7146]: I0318 13:10:55.316155 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.316321 master-0 kubenswrapper[7146]: I0318 13:10:55.316188 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.317181 master-0 kubenswrapper[7146]: I0318 13:10:55.317152 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.321002 master-0 kubenswrapper[7146]: I0318 13:10:55.320966 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.358893 master-0 kubenswrapper[7146]: I0318 13:10:55.358846 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5475b\" (UniqueName: \"kubernetes.io/projected/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-kube-api-access-5475b\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.411262 master-0 kubenswrapper[7146]: I0318 13:10:55.410287 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw"
Mar 18 13:10:55.432677 master-0 kubenswrapper[7146]: I0318 13:10:55.432610 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w"
Mar 18 13:10:55.460103 master-0 kubenswrapper[7146]: I0318 13:10:55.459953 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc"
Mar 18 13:10:55.480960 master-0 kubenswrapper[7146]: I0318 13:10:55.479956 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"
Mar 18 13:10:55.496996 master-0 kubenswrapper[7146]: I0318 13:10:55.496914 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"
Mar 18 13:10:55.508757 master-0 kubenswrapper[7146]: I0318 13:10:55.508115 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:10:55.775606 master-0 kubenswrapper[7146]: I0318 13:10:55.775540 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" event={"ID":"24935b14-2768-435e-8ed1-73ecac4e05d8","Type":"ContainerStarted","Data":"90b5f2bab5d48d375ec84dcad33a3cefcbba375c32cba3bc75e2670a6864dd98"}
Mar 18 13:10:55.777631 master-0 kubenswrapper[7146]: I0318 13:10:55.777601 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerStarted","Data":"d6aa70225229cd8d076f3c277c4695c96efe66c480ed25e342a99d26cce5aa22"}
Mar 18 13:10:55.905576 master-0 kubenswrapper[7146]: I0318 13:10:55.905524 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w"]
Mar 18 13:10:55.989535 master-0 kubenswrapper[7146]: I0318 13:10:55.987438 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw"]
Mar 18 13:10:55.991696 master-0 kubenswrapper[7146]: I0318 13:10:55.991650 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc"]
Mar 18 13:10:56.070067 master-0 kubenswrapper[7146]: I0318 13:10:56.070019 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"]
Mar 18 13:10:56.076175 master-0 kubenswrapper[7146]: W0318 13:10:56.076138 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd033b5b_af07_4e69_9a5c_46f7c9bde95a.slice/crio-169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5 WatchSource:0}: Error finding container 169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5: Status 404 returned error can't find the container with id 169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5
Mar 18 13:10:56.786377 master-0 kubenswrapper[7146]: I0318 13:10:56.786313 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" event={"ID":"bd033b5b-af07-4e69-9a5c-46f7c9bde95a","Type":"ContainerStarted","Data":"75e088da8d481b5bb2c284fab773c318aa5ff4cbc963b47f7987a8bf5299e322"}
Mar 18 13:10:56.786377 master-0 kubenswrapper[7146]: I0318 13:10:56.786374 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" event={"ID":"bd033b5b-af07-4e69-9a5c-46f7c9bde95a","Type":"ContainerStarted","Data":"169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5"}
Mar 18 13:10:56.789188 master-0 kubenswrapper[7146]: I0318 13:10:56.788857 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" event={"ID":"7e309570-09d0-412a-a74b-c5397d048a30","Type":"ContainerStarted","Data":"93c3e972c1d72b8d1ee15395999be03050512e051706f9a30dccebe0b0487b51"}
Mar 18 13:10:56.790074 master-0 kubenswrapper[7146]: I0318 13:10:56.790043 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" event={"ID":"7a951627-c032-4846-821c-c4bcbf4a91b9","Type":"ContainerStarted","Data":"6c7a102b9c64081966ad588bf6d34058c0849b6b42caa6a8951b5cab3df0847b"}
Mar 18 13:10:56.791969 master-0 kubenswrapper[7146]: I0318 13:10:56.791912 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" event={"ID":"24935b14-2768-435e-8ed1-73ecac4e05d8","Type":"ContainerStarted","Data":"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"}
Mar 18 13:10:56.794529 master-0 kubenswrapper[7146]: I0318 13:10:56.794473 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" event={"ID":"7fa6920b-f7d9-4758-bba9-356a2c8b1b67","Type":"ContainerStarted","Data":"85f2314cebb2f3fff04724c5a8886f41a66b250f9e8445a6a46906e189d12226"}
Mar 18 13:10:56.794600 master-0 kubenswrapper[7146]: I0318 13:10:56.794532 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" event={"ID":"7fa6920b-f7d9-4758-bba9-356a2c8b1b67","Type":"ContainerStarted","Data":"c7de43cf6bf0c5d7b2b878ebc5990ddb62b5d5e375bde178cb4882acdf2057b0"}
Mar 18 13:10:59.822227 master-0 kubenswrapper[7146]: I0318 13:10:59.822171 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" event={"ID":"bd033b5b-af07-4e69-9a5c-46f7c9bde95a","Type":"ContainerStarted","Data":"e20cb392c2151c9b567d2f9cb92d9caffc6ffa0a0c94ec6c22fe2417cecc2fef"}
Mar 18 13:10:59.827326 master-0 kubenswrapper[7146]: I0318 13:10:59.826709 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" event={"ID":"24935b14-2768-435e-8ed1-73ecac4e05d8","Type":"ContainerStarted","Data":"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"}
Mar 18 13:10:59.845695 master-0 kubenswrapper[7146]: I0318 13:10:59.845621 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" podStartSLOduration=2.198070865 podStartE2EDuration="4.845607865s" podCreationTimestamp="2026-03-18 13:10:55 +0000 UTC" firstStartedPulling="2026-03-18 13:10:56.169283024 +0000 UTC m=+164.977500385" lastFinishedPulling="2026-03-18 13:10:58.816820024 +0000 UTC m=+167.625037385" observedRunningTime="2026-03-18 13:10:59.843881266 +0000 UTC m=+168.652098647" watchObservedRunningTime="2026-03-18 13:10:59.845607865 +0000 UTC m=+168.653825226"
Mar 18 13:11:00.575998 master-0 kubenswrapper[7146]: I0318 13:11:00.575899 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" podStartSLOduration=2.607670991 podStartE2EDuration="5.575877257s" podCreationTimestamp="2026-03-18 13:10:55 +0000 UTC" firstStartedPulling="2026-03-18 13:10:55.844339897 +0000 UTC m=+164.652557258" lastFinishedPulling="2026-03-18 13:10:58.812546163 +0000 UTC m=+167.620763524" observedRunningTime="2026-03-18 13:11:00.573599093 +0000 UTC m=+169.381816454" watchObservedRunningTime="2026-03-18 13:11:00.575877257 +0000 UTC m=+169.384094618"
Mar 18 13:11:01.130063 master-0 kubenswrapper[7146]: I0318 13:11:01.130007 7146 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 13:11:01.130678 master-0 kubenswrapper[7146]: I0318 13:11:01.130242 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333" gracePeriod=30
Mar 18 13:11:01.130678 master-0 kubenswrapper[7146]: I0318 13:11:01.130362 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41" gracePeriod=30
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: I0318 13:11:01.138040 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: E0318 13:11:01.138368 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: I0318 13:11:01.138385 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: E0318 13:11:01.138403 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: I0318 13:11:01.138415 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: E0318 13:11:01.138427 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.138478 master-0 kubenswrapper[7146]: I0318 13:11:01.138437 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.138753 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.138785 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.138837 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: E0318 13:11:01.139194 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.139213 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: E0318 13:11:01.139227 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.139237 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.139383 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.139443 master-0 kubenswrapper[7146]: I0318 13:11:01.139396 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 13:11:01.140427 master-0 kubenswrapper[7146]: I0318 13:11:01.140397 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.195286 master-0 kubenswrapper[7146]: I0318 13:11:01.195124 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.195505 master-0 kubenswrapper[7146]: I0318 13:11:01.195305 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.256481 master-0 kubenswrapper[7146]: I0318 13:11:01.256421 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 13:11:01.296919 master-0 kubenswrapper[7146]: I0318 13:11:01.296829 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.296919 master-0 kubenswrapper[7146]: I0318 13:11:01.296970 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.297266 master-0 kubenswrapper[7146]: I0318 13:11:01.297021 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.297266 master-0 kubenswrapper[7146]: I0318 13:11:01.297124 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.555107 master-0 kubenswrapper[7146]: I0318 13:11:01.554992 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:11:01.987455 master-0 kubenswrapper[7146]: W0318 13:11:01.987411 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf88d0f62c0688ab1909dc97f30d381b9.slice/crio-5ba8960ffc8f4d261a29f99e7ea70248d9ec455a8db5fa3f1122f7be93611c4e WatchSource:0}: Error finding container 5ba8960ffc8f4d261a29f99e7ea70248d9ec455a8db5fa3f1122f7be93611c4e: Status 404 returned error can't find the container with id 5ba8960ffc8f4d261a29f99e7ea70248d9ec455a8db5fa3f1122f7be93611c4e
Mar 18 13:11:02.019920 master-0 kubenswrapper[7146]: I0318 13:11:02.019891 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.103776 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.103842 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.103869 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.103900 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.103926 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.104135 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.104208 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.104210 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.104237 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:02.104280 master-0 kubenswrapper[7146]: I0318 13:11:02.104256 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:02.206838 master-0 kubenswrapper[7146]: I0318 13:11:02.206790 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:02.206838 master-0 kubenswrapper[7146]: I0318 13:11:02.206839 7146 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:02.207389 master-0 kubenswrapper[7146]: I0318 13:11:02.206853 7146 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:02.207389 master-0 kubenswrapper[7146]: I0318 13:11:02.206870 7146 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:02.207389 master-0 kubenswrapper[7146]: I0318 13:11:02.206884 7146 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:02.865988 master-0 kubenswrapper[7146]: I0318 13:11:02.865019 7146 generic.go:334] "Generic (PLEG): container finished" podID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerID="3aecc1592a5c76f7851ff01bf9ec75d38c020718af10663c3a3924f329ae17c6" exitCode=0
Mar 18 13:11:02.865988 master-0 kubenswrapper[7146]: I0318 13:11:02.865102 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"f4d88fc1-4e92-432e-ac2c-e1c489b15e93","Type":"ContainerDied","Data":"3aecc1592a5c76f7851ff01bf9ec75d38c020718af10663c3a3924f329ae17c6"}
Mar 18 13:11:02.868500 master-0 kubenswrapper[7146]: I0318 13:11:02.868346 7146 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41" exitCode=0
Mar 18 13:11:02.868500 master-0 kubenswrapper[7146]: I0318 13:11:02.868369 7146 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333" exitCode=0
Mar 18 13:11:02.868500 master-0 kubenswrapper[7146]: I0318 13:11:02.868434 7146 scope.go:117] "RemoveContainer" containerID="893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"
Mar 18 13:11:02.868632 master-0 kubenswrapper[7146]: I0318 13:11:02.868572 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:11:02.890537 master-0 kubenswrapper[7146]: I0318 13:11:02.890404 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerStarted","Data":"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"}
Mar 18 13:11:02.890537 master-0 kubenswrapper[7146]: I0318 13:11:02.890469 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerStarted","Data":"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"}
Mar 18 13:11:02.890537 master-0 kubenswrapper[7146]: I0318 13:11:02.890487 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerStarted","Data":"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"}
Mar 18 13:11:02.894000 master-0 kubenswrapper[7146]: I0318 13:11:02.893963 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" event={"ID":"7e309570-09d0-412a-a74b-c5397d048a30","Type":"ContainerStarted","Data":"383a190aa3756150d204ef133ee8dfe4511709ead385bdbee3ac49de64336984"}
Mar 18 13:11:02.894161 master-0 kubenswrapper[7146]: I0318 13:11:02.894148 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" event={"ID":"7e309570-09d0-412a-a74b-c5397d048a30","Type":"ContainerStarted","Data":"e5cc822fc4a12a330b8352c541da7824b7f5d47104fa4ff465dcfc8c614cb880"}
Mar 18 13:11:02.922921 master-0 kubenswrapper[7146]: I0318 13:11:02.922876 7146 scope.go:117] "RemoveContainer" containerID="8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"
Mar 18 13:11:02.935481 master-0 kubenswrapper[7146]: I0318 13:11:02.935371 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" event={"ID":"7a951627-c032-4846-821c-c4bcbf4a91b9","Type":"ContainerStarted","Data":"a8459df9395de9584914bcd1d56690f1b07e1e54842c9ed88467d29011598847"}
Mar 18 13:11:02.948247 master-0 kubenswrapper[7146]: I0318 13:11:02.948203 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"}
Mar 18 13:11:02.948428 master-0 kubenswrapper[7146]: I0318 13:11:02.948251 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"1a3f1cc2c06b3716aaec57cfe182c6cc3f75f423059d28cf0ab2c58cba5e63fc"}
Mar 18 13:11:02.948428 master-0 kubenswrapper[7146]: I0318 13:11:02.948266 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"}
Mar 18 13:11:02.948428 master-0 kubenswrapper[7146]: I0318 13:11:02.948277 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"5ba8960ffc8f4d261a29f99e7ea70248d9ec455a8db5fa3f1122f7be93611c4e"}
Mar 18 13:11:02.958555 master-0 kubenswrapper[7146]: I0318 13:11:02.958477 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" podStartSLOduration=2.10574707 podStartE2EDuration="7.9584582s" podCreationTimestamp="2026-03-18 13:10:55 +0000 UTC" firstStartedPulling="2026-03-18 13:10:56.095796442 +0000 UTC m=+164.904013803" lastFinishedPulling="2026-03-18 13:11:01.948507572 +0000 UTC m=+170.756724933" observedRunningTime="2026-03-18 13:11:02.958139121 +0000 UTC m=+171.766356502" watchObservedRunningTime="2026-03-18 13:11:02.9584582 +0000 UTC m=+171.766675561"
Mar 18 13:11:02.958670 master-0 kubenswrapper[7146]: I0318 13:11:02.958595 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" podStartSLOduration=1.488360036 podStartE2EDuration="7.958589464s" podCreationTimestamp="2026-03-18 13:10:55 +0000 UTC" firstStartedPulling="2026-03-18 13:10:55.530345799 +0000 UTC m=+164.338563160" lastFinishedPulling="2026-03-18 13:11:02.000575217 +0000 UTC m=+170.808792588" observedRunningTime="2026-03-18 13:11:02.921779101 +0000 UTC m=+171.729996472" watchObservedRunningTime="2026-03-18 13:11:02.958589464 +0000 UTC m=+171.766806835"
Mar 18 13:11:02.981345 master-0 kubenswrapper[7146]: I0318 13:11:02.975097 7146 scope.go:117] "RemoveContainer" containerID="e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"
Mar 18 13:11:03.005742 master-0 kubenswrapper[7146]: I0318 13:11:03.003353 7146 scope.go:117] "RemoveContainer" containerID="893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"
Mar 18 13:11:03.008281 master-0 kubenswrapper[7146]: I0318 13:11:03.007601 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" podStartSLOduration=2.036027605 podStartE2EDuration="8.007577142s" podCreationTimestamp="2026-03-18 13:10:55 +0000 UTC" firstStartedPulling="2026-03-18 13:10:56.011754511 +0000 UTC m=+164.819971872" lastFinishedPulling="2026-03-18 13:11:01.983304048 +0000 UTC m=+170.791521409" observedRunningTime="2026-03-18 13:11:03.006525902 +0000 UTC m=+171.814743263" watchObservedRunningTime="2026-03-18 13:11:03.007577142 +0000 UTC m=+171.815794503"
Mar 18 13:11:03.012485 master-0 kubenswrapper[7146]: E0318 13:11:03.012434 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41\": container with ID starting with 893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41 not found: ID does not exist" containerID="893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"
Mar 18 13:11:03.012485 master-0 kubenswrapper[7146]: I0318 13:11:03.012479 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"} err="failed to get container status \"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41\": rpc error: code = NotFound desc = could not find container \"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41\": container with ID starting with 893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41 not found: ID does not exist"
Mar 18 13:11:03.012697 master-0 kubenswrapper[7146]: I0318 13:11:03.012505 7146 scope.go:117] "RemoveContainer" containerID="8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: E0318 13:11:03.017297 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38\": container with ID starting with 8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38 not found: ID does not exist" containerID="8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.017346 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"} err="failed to get container status \"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38\": rpc error: code = NotFound desc = could not find container \"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38\": container with ID starting with 8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38 not found: ID does not exist"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.017375 7146 scope.go:117] "RemoveContainer" containerID="e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: E0318 13:11:03.017769 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333\": container with ID starting with e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333 not found: ID does not exist" containerID="e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.017802 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"} err="failed to get container status \"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333\": rpc error: code = NotFound desc = could not find container \"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333\": container with ID starting with e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333 not found: ID does not exist"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.017823 7146 scope.go:117] "RemoveContainer" containerID="893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.018149 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41"} err="failed to get container status \"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41\": rpc error: code = NotFound desc = could not find container \"893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41\": container with ID starting with 893603396e8e800b4788d5b94c9dd4a16cacde6fac87a290be4cef783a9a8d41 not found: ID does not exist"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.018169 7146 scope.go:117] "RemoveContainer" containerID="8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.018397 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38"} err="failed to get container status \"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38\": rpc error: code = NotFound desc = could not find container \"8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38\": container with ID starting with 8e338c40daeb06dda8a9fe3ae91917410047638164a7e8e794580b572222df38 not found: ID does not exist"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.018415 7146 scope.go:117] "RemoveContainer" containerID="e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"
Mar 18 13:11:03.018808 master-0 kubenswrapper[7146]: I0318 13:11:03.018742 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333"} err="failed to get container status \"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333\": rpc error: code = NotFound desc = could not find container \"e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333\": container with ID starting with e8315a2214144fa1792d6c47a099e40e46b488bbaa624b6605c19f7100d3a333 not found: ID does not exist"
Mar 18 13:11:03.368330 master-0 kubenswrapper[7146]: I0318 13:11:03.368261 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes"
Mar 18 13:11:03.368925 master-0 kubenswrapper[7146]: I0318 13:11:03.368884 7146 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 18 13:11:03.384252 master-0 kubenswrapper[7146]: I0318
13:11:03.384194 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:11:03.384252 master-0 kubenswrapper[7146]: I0318 13:11:03.384234 7146 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="ba5b39e9-fca6-4771-8bf5-09f7133e4bd6" Mar 18 13:11:03.387382 master-0 kubenswrapper[7146]: I0318 13:11:03.387350 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:11:03.387382 master-0 kubenswrapper[7146]: I0318 13:11:03.387373 7146 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="ba5b39e9-fca6-4771-8bf5-09f7133e4bd6" Mar 18 13:11:03.963421 master-0 kubenswrapper[7146]: I0318 13:11:03.963187 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"} Mar 18 13:11:03.997170 master-0 kubenswrapper[7146]: I0318 13:11:03.997104 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.997087739 podStartE2EDuration="2.997087739s" podCreationTimestamp="2026-03-18 13:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:03.995296589 +0000 UTC m=+172.803513950" watchObservedRunningTime="2026-03-18 13:11:03.997087739 +0000 UTC m=+172.805305110" Mar 18 13:11:07.794165 master-0 kubenswrapper[7146]: I0318 13:11:07.794091 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:11:07.898474 master-0 kubenswrapper[7146]: I0318 13:11:07.898396 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kube-api-access\") pod \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " Mar 18 13:11:07.898791 master-0 kubenswrapper[7146]: I0318 13:11:07.898573 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kubelet-dir\") pod \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " Mar 18 13:11:07.898791 master-0 kubenswrapper[7146]: I0318 13:11:07.898667 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-var-lock\") pod \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\" (UID: \"f4d88fc1-4e92-432e-ac2c-e1c489b15e93\") " Mar 18 13:11:07.898791 master-0 kubenswrapper[7146]: I0318 13:11:07.898724 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4d88fc1-4e92-432e-ac2c-e1c489b15e93" (UID: "f4d88fc1-4e92-432e-ac2c-e1c489b15e93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:07.898922 master-0 kubenswrapper[7146]: I0318 13:11:07.898811 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-var-lock" (OuterVolumeSpecName: "var-lock") pod "f4d88fc1-4e92-432e-ac2c-e1c489b15e93" (UID: "f4d88fc1-4e92-432e-ac2c-e1c489b15e93"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:07.899045 master-0 kubenswrapper[7146]: I0318 13:11:07.899003 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:07.899045 master-0 kubenswrapper[7146]: I0318 13:11:07.899035 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:07.901677 master-0 kubenswrapper[7146]: I0318 13:11:07.901599 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4d88fc1-4e92-432e-ac2c-e1c489b15e93" (UID: "f4d88fc1-4e92-432e-ac2c-e1c489b15e93"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:07.995272 master-0 kubenswrapper[7146]: I0318 13:11:07.995210 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"f4d88fc1-4e92-432e-ac2c-e1c489b15e93","Type":"ContainerDied","Data":"4c416409750419b3738641dbf762d8e4ba531250589956be62e2ee0593e39b8a"} Mar 18 13:11:07.995272 master-0 kubenswrapper[7146]: I0318 13:11:07.995258 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c416409750419b3738641dbf762d8e4ba531250589956be62e2ee0593e39b8a" Mar 18 13:11:07.995477 master-0 kubenswrapper[7146]: I0318 13:11:07.995274 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:11:08.000237 master-0 kubenswrapper[7146]: I0318 13:11:08.000184 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4d88fc1-4e92-432e-ac2c-e1c489b15e93-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:09.004354 master-0 kubenswrapper[7146]: I0318 13:11:09.004286 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" event={"ID":"7fa6920b-f7d9-4758-bba9-356a2c8b1b67","Type":"ContainerStarted","Data":"4d5734dcb478946086c59614a3c405fd97eda4a734701371bd0e6664fc8b864f"} Mar 18 13:11:09.236964 master-0 kubenswrapper[7146]: I0318 13:11:09.236834 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" podStartSLOduration=2.50580038 podStartE2EDuration="14.236816985s" podCreationTimestamp="2026-03-18 13:10:55 +0000 UTC" firstStartedPulling="2026-03-18 13:10:56.106414873 +0000 UTC m=+164.914632234" lastFinishedPulling="2026-03-18 13:11:07.837431478 +0000 UTC m=+176.645648839" observedRunningTime="2026-03-18 13:11:09.234164538 +0000 UTC m=+178.042381899" watchObservedRunningTime="2026-03-18 13:11:09.236816985 +0000 UTC m=+178.045034346" Mar 18 13:11:10.467356 master-0 kubenswrapper[7146]: I0318 13:11:10.467217 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7499"] Mar 18 13:11:10.468011 master-0 kubenswrapper[7146]: I0318 13:11:10.467550 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7499" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="registry-server" containerID="cri-o://53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb" gracePeriod=2 Mar 18 
13:11:10.469787 master-0 kubenswrapper[7146]: I0318 13:11:10.469736 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gwnt"] Mar 18 13:11:10.470063 master-0 kubenswrapper[7146]: I0318 13:11:10.470027 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7gwnt" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="registry-server" containerID="cri-o://34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5" gracePeriod=2 Mar 18 13:11:10.495550 master-0 kubenswrapper[7146]: I0318 13:11:10.495493 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p546b"] Mar 18 13:11:10.495793 master-0 kubenswrapper[7146]: E0318 13:11:10.495727 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerName="installer" Mar 18 13:11:10.495793 master-0 kubenswrapper[7146]: I0318 13:11:10.495739 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerName="installer" Mar 18 13:11:10.495895 master-0 kubenswrapper[7146]: I0318 13:11:10.495830 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerName="installer" Mar 18 13:11:10.496518 master-0 kubenswrapper[7146]: I0318 13:11:10.496501 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.499850 master-0 kubenswrapper[7146]: I0318 13:11:10.499811 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nhwvw"] Mar 18 13:11:10.500808 master-0 kubenswrapper[7146]: I0318 13:11:10.500781 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.501427 master-0 kubenswrapper[7146]: I0318 13:11:10.501326 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-rm9sr" Mar 18 13:11:10.502118 master-0 kubenswrapper[7146]: I0318 13:11:10.502098 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-wl4c6" Mar 18 13:11:10.512112 master-0 kubenswrapper[7146]: I0318 13:11:10.512065 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p546b"] Mar 18 13:11:10.528057 master-0 kubenswrapper[7146]: I0318 13:11:10.527790 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhwvw"] Mar 18 13:11:10.534188 master-0 kubenswrapper[7146]: I0318 13:11:10.532882 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-utilities\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.534188 master-0 kubenswrapper[7146]: I0318 13:11:10.532923 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-catalog-content\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.534188 master-0 kubenswrapper[7146]: I0318 13:11:10.532948 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-utilities\") pod 
\"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.534188 master-0 kubenswrapper[7146]: I0318 13:11:10.532978 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rccw\" (UniqueName: \"kubernetes.io/projected/2e0fa133-60e7-47d0-996e-7e85aef2a218-kube-api-access-7rccw\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.534188 master-0 kubenswrapper[7146]: I0318 13:11:10.533017 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-catalog-content\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.534188 master-0 kubenswrapper[7146]: I0318 13:11:10.533042 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nllws\" (UniqueName: \"kubernetes.io/projected/317a89ea-e9dd-4167-8568-bb36e2431015-kube-api-access-nllws\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.633964 master-0 kubenswrapper[7146]: I0318 13:11:10.633872 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nllws\" (UniqueName: \"kubernetes.io/projected/317a89ea-e9dd-4167-8568-bb36e2431015-kube-api-access-nllws\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.634265 master-0 kubenswrapper[7146]: I0318 13:11:10.633987 7146 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-utilities\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.634265 master-0 kubenswrapper[7146]: I0318 13:11:10.634021 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-catalog-content\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.634265 master-0 kubenswrapper[7146]: I0318 13:11:10.634041 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-utilities\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.634265 master-0 kubenswrapper[7146]: I0318 13:11:10.634056 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rccw\" (UniqueName: \"kubernetes.io/projected/2e0fa133-60e7-47d0-996e-7e85aef2a218-kube-api-access-7rccw\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.634265 master-0 kubenswrapper[7146]: I0318 13:11:10.634106 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-catalog-content\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.634842 master-0 kubenswrapper[7146]: I0318 13:11:10.634807 7146 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-catalog-content\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.635313 master-0 kubenswrapper[7146]: I0318 13:11:10.635278 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-utilities\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.636459 master-0 kubenswrapper[7146]: I0318 13:11:10.635530 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-utilities\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.636459 master-0 kubenswrapper[7146]: I0318 13:11:10.635880 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-catalog-content\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.650077 master-0 kubenswrapper[7146]: I0318 13:11:10.649759 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nllws\" (UniqueName: \"kubernetes.io/projected/317a89ea-e9dd-4167-8568-bb36e2431015-kube-api-access-nllws\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.651952 master-0 kubenswrapper[7146]: I0318 13:11:10.651890 7146 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7rccw\" (UniqueName: \"kubernetes.io/projected/2e0fa133-60e7-47d0-996e-7e85aef2a218-kube-api-access-7rccw\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.672686 master-0 kubenswrapper[7146]: I0318 13:11:10.672005 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:10.714019 master-0 kubenswrapper[7146]: I0318 13:11:10.713916 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:10.962193 master-0 kubenswrapper[7146]: I0318 13:11:10.962156 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:11:10.986736 master-0 kubenswrapper[7146]: I0318 13:11:10.986693 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:11:11.016535 master-0 kubenswrapper[7146]: I0318 13:11:11.016428 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gwnt" Mar 18 13:11:11.016535 master-0 kubenswrapper[7146]: I0318 13:11:11.016449 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerDied","Data":"34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5"} Mar 18 13:11:11.016535 master-0 kubenswrapper[7146]: I0318 13:11:11.016487 7146 scope.go:117] "RemoveContainer" containerID="34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5" Mar 18 13:11:11.016868 master-0 kubenswrapper[7146]: I0318 13:11:11.016380 7146 generic.go:334] "Generic (PLEG): container finished" podID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerID="34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5" exitCode=0 Mar 18 13:11:11.016868 master-0 kubenswrapper[7146]: I0318 13:11:11.016759 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gwnt" event={"ID":"1afcb319-16c7-4f27-9db8-21b105a1bdba","Type":"ContainerDied","Data":"4411dce91ebbb16615b8e509124d82cbd8fb2e5c4cdd14d9d48b5dd2c475d27f"} Mar 18 13:11:11.018925 master-0 kubenswrapper[7146]: I0318 13:11:11.018896 7146 generic.go:334] "Generic (PLEG): container finished" podID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerID="53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb" exitCode=0 Mar 18 13:11:11.019010 master-0 kubenswrapper[7146]: I0318 13:11:11.018940 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7499" event={"ID":"b282ab6f-702c-44cc-942e-f2320b61d42e","Type":"ContainerDied","Data":"53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb"} Mar 18 13:11:11.019010 master-0 kubenswrapper[7146]: I0318 13:11:11.018987 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7499" 
event={"ID":"b282ab6f-702c-44cc-942e-f2320b61d42e","Type":"ContainerDied","Data":"1b8e47c9b17efae6a6cc0dbeb65d00fae0910922cb941a8ca1e5a3ea502f8b3f"} Mar 18 13:11:11.019095 master-0 kubenswrapper[7146]: I0318 13:11:11.019061 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7499" Mar 18 13:11:11.028191 master-0 kubenswrapper[7146]: I0318 13:11:11.027409 7146 scope.go:117] "RemoveContainer" containerID="53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33" Mar 18 13:11:11.038621 master-0 kubenswrapper[7146]: I0318 13:11:11.038591 7146 scope.go:117] "RemoveContainer" containerID="1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778" Mar 18 13:11:11.038723 master-0 kubenswrapper[7146]: I0318 13:11:11.038583 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-utilities\") pod \"1afcb319-16c7-4f27-9db8-21b105a1bdba\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " Mar 18 13:11:11.038852 master-0 kubenswrapper[7146]: I0318 13:11:11.038833 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-utilities\") pod \"b282ab6f-702c-44cc-942e-f2320b61d42e\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " Mar 18 13:11:11.038911 master-0 kubenswrapper[7146]: I0318 13:11:11.038872 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r69fh\" (UniqueName: \"kubernetes.io/projected/b282ab6f-702c-44cc-942e-f2320b61d42e-kube-api-access-r69fh\") pod \"b282ab6f-702c-44cc-942e-f2320b61d42e\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " Mar 18 13:11:11.038975 master-0 kubenswrapper[7146]: I0318 13:11:11.038921 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-catalog-content\") pod \"1afcb319-16c7-4f27-9db8-21b105a1bdba\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " Mar 18 13:11:11.039018 master-0 kubenswrapper[7146]: I0318 13:11:11.038982 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-catalog-content\") pod \"b282ab6f-702c-44cc-942e-f2320b61d42e\" (UID: \"b282ab6f-702c-44cc-942e-f2320b61d42e\") " Mar 18 13:11:11.039067 master-0 kubenswrapper[7146]: I0318 13:11:11.039045 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2p8\" (UniqueName: \"kubernetes.io/projected/1afcb319-16c7-4f27-9db8-21b105a1bdba-kube-api-access-8z2p8\") pod \"1afcb319-16c7-4f27-9db8-21b105a1bdba\" (UID: \"1afcb319-16c7-4f27-9db8-21b105a1bdba\") " Mar 18 13:11:11.040637 master-0 kubenswrapper[7146]: I0318 13:11:11.040417 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-utilities" (OuterVolumeSpecName: "utilities") pod "1afcb319-16c7-4f27-9db8-21b105a1bdba" (UID: "1afcb319-16c7-4f27-9db8-21b105a1bdba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:11.045202 master-0 kubenswrapper[7146]: I0318 13:11:11.040437 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-utilities" (OuterVolumeSpecName: "utilities") pod "b282ab6f-702c-44cc-942e-f2320b61d42e" (UID: "b282ab6f-702c-44cc-942e-f2320b61d42e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:11.049260 master-0 kubenswrapper[7146]: I0318 13:11:11.046256 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b282ab6f-702c-44cc-942e-f2320b61d42e-kube-api-access-r69fh" (OuterVolumeSpecName: "kube-api-access-r69fh") pod "b282ab6f-702c-44cc-942e-f2320b61d42e" (UID: "b282ab6f-702c-44cc-942e-f2320b61d42e"). InnerVolumeSpecName "kube-api-access-r69fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:11.049260 master-0 kubenswrapper[7146]: I0318 13:11:11.048594 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afcb319-16c7-4f27-9db8-21b105a1bdba-kube-api-access-8z2p8" (OuterVolumeSpecName: "kube-api-access-8z2p8") pod "1afcb319-16c7-4f27-9db8-21b105a1bdba" (UID: "1afcb319-16c7-4f27-9db8-21b105a1bdba"). InnerVolumeSpecName "kube-api-access-8z2p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:11.050344 master-0 kubenswrapper[7146]: I0318 13:11:11.049526 7146 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:11.050344 master-0 kubenswrapper[7146]: I0318 13:11:11.049557 7146 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:11.050344 master-0 kubenswrapper[7146]: I0318 13:11:11.049572 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r69fh\" (UniqueName: \"kubernetes.io/projected/b282ab6f-702c-44cc-942e-f2320b61d42e-kube-api-access-r69fh\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:11.050344 master-0 kubenswrapper[7146]: I0318 13:11:11.049587 7146 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8z2p8\" (UniqueName: \"kubernetes.io/projected/1afcb319-16c7-4f27-9db8-21b105a1bdba-kube-api-access-8z2p8\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:11.068319 master-0 kubenswrapper[7146]: I0318 13:11:11.068270 7146 scope.go:117] "RemoveContainer" containerID="34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5" Mar 18 13:11:11.070003 master-0 kubenswrapper[7146]: E0318 13:11:11.069815 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5\": container with ID starting with 34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5 not found: ID does not exist" containerID="34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5" Mar 18 13:11:11.070003 master-0 kubenswrapper[7146]: I0318 13:11:11.069880 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5"} err="failed to get container status \"34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5\": rpc error: code = NotFound desc = could not find container \"34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5\": container with ID starting with 34948da4a943b048ac43dee7b404a72ee1f371c5d4ee08c40e1377c27b42dcd5 not found: ID does not exist" Mar 18 13:11:11.070003 master-0 kubenswrapper[7146]: I0318 13:11:11.069932 7146 scope.go:117] "RemoveContainer" containerID="53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33" Mar 18 13:11:11.070500 master-0 kubenswrapper[7146]: E0318 13:11:11.070455 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33\": container with ID starting with 53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33 
not found: ID does not exist" containerID="53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33" Mar 18 13:11:11.070500 master-0 kubenswrapper[7146]: I0318 13:11:11.070489 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33"} err="failed to get container status \"53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33\": rpc error: code = NotFound desc = could not find container \"53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33\": container with ID starting with 53b6f2a818ea8ebb3e5400c957f24741f167cf9fba80a4984baaa1bdb7229c33 not found: ID does not exist" Mar 18 13:11:11.070613 master-0 kubenswrapper[7146]: I0318 13:11:11.070512 7146 scope.go:117] "RemoveContainer" containerID="1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778" Mar 18 13:11:11.070920 master-0 kubenswrapper[7146]: E0318 13:11:11.070877 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778\": container with ID starting with 1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778 not found: ID does not exist" containerID="1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778" Mar 18 13:11:11.071015 master-0 kubenswrapper[7146]: I0318 13:11:11.070930 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778"} err="failed to get container status \"1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778\": rpc error: code = NotFound desc = could not find container \"1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778\": container with ID starting with 1013e7a56d2a54fa4a4d96325895916a4829a0189fbb58f24bbcd3715bd15778 not found: ID does not exist" Mar 18 
13:11:11.071015 master-0 kubenswrapper[7146]: I0318 13:11:11.070961 7146 scope.go:117] "RemoveContainer" containerID="53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb" Mar 18 13:11:11.082421 master-0 kubenswrapper[7146]: I0318 13:11:11.082347 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1afcb319-16c7-4f27-9db8-21b105a1bdba" (UID: "1afcb319-16c7-4f27-9db8-21b105a1bdba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:11.100729 master-0 kubenswrapper[7146]: I0318 13:11:11.097981 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b282ab6f-702c-44cc-942e-f2320b61d42e" (UID: "b282ab6f-702c-44cc-942e-f2320b61d42e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:11.102115 master-0 kubenswrapper[7146]: I0318 13:11:11.102075 7146 scope.go:117] "RemoveContainer" containerID="9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57" Mar 18 13:11:11.115608 master-0 kubenswrapper[7146]: I0318 13:11:11.115577 7146 scope.go:117] "RemoveContainer" containerID="1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205" Mar 18 13:11:11.130478 master-0 kubenswrapper[7146]: I0318 13:11:11.130393 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p546b"] Mar 18 13:11:11.131969 master-0 kubenswrapper[7146]: I0318 13:11:11.131599 7146 scope.go:117] "RemoveContainer" containerID="53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb" Mar 18 13:11:11.132582 master-0 kubenswrapper[7146]: E0318 13:11:11.132423 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb\": container with ID starting with 53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb not found: ID does not exist" containerID="53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb" Mar 18 13:11:11.132582 master-0 kubenswrapper[7146]: I0318 13:11:11.132454 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb"} err="failed to get container status \"53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb\": rpc error: code = NotFound desc = could not find container \"53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb\": container with ID starting with 53a04c73cfc8bd7f1a40a009ff86c9c965937a42458421d21e8c994da8bb91fb not found: ID does not exist" Mar 18 13:11:11.132582 master-0 kubenswrapper[7146]: I0318 13:11:11.132477 7146 scope.go:117] 
"RemoveContainer" containerID="9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57" Mar 18 13:11:11.133230 master-0 kubenswrapper[7146]: E0318 13:11:11.132720 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57\": container with ID starting with 9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57 not found: ID does not exist" containerID="9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57" Mar 18 13:11:11.133230 master-0 kubenswrapper[7146]: I0318 13:11:11.132746 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57"} err="failed to get container status \"9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57\": rpc error: code = NotFound desc = could not find container \"9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57\": container with ID starting with 9f38cda3aa6b5e31f38fae8c4af784750da4a078aa04e400295175dc0d72fc57 not found: ID does not exist" Mar 18 13:11:11.133230 master-0 kubenswrapper[7146]: I0318 13:11:11.132760 7146 scope.go:117] "RemoveContainer" containerID="1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205" Mar 18 13:11:11.133230 master-0 kubenswrapper[7146]: E0318 13:11:11.133092 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205\": container with ID starting with 1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205 not found: ID does not exist" containerID="1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205" Mar 18 13:11:11.133230 master-0 kubenswrapper[7146]: I0318 13:11:11.133128 7146 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205"} err="failed to get container status \"1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205\": rpc error: code = NotFound desc = could not find container \"1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205\": container with ID starting with 1796a0b0f16eab9307a88efa9cbf5a005462ea8deb7aaed595c97e7cc944c205 not found: ID does not exist" Mar 18 13:11:11.135797 master-0 kubenswrapper[7146]: W0318 13:11:11.135764 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e0fa133_60e7_47d0_996e_7e85aef2a218.slice/crio-1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e WatchSource:0}: Error finding container 1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e: Status 404 returned error can't find the container with id 1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e Mar 18 13:11:11.151145 master-0 kubenswrapper[7146]: I0318 13:11:11.151064 7146 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afcb319-16c7-4f27-9db8-21b105a1bdba-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:11.151145 master-0 kubenswrapper[7146]: I0318 13:11:11.151123 7146 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b282ab6f-702c-44cc-942e-f2320b61d42e-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:11.202259 master-0 kubenswrapper[7146]: I0318 13:11:11.202205 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhwvw"] Mar 18 13:11:11.216446 master-0 kubenswrapper[7146]: W0318 13:11:11.216372 7146 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317a89ea_e9dd_4167_8568_bb36e2431015.slice/crio-0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588 WatchSource:0}: Error finding container 0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588: Status 404 returned error can't find the container with id 0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588 Mar 18 13:11:11.370115 master-0 kubenswrapper[7146]: I0318 13:11:11.369925 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gwnt"] Mar 18 13:11:11.372942 master-0 kubenswrapper[7146]: I0318 13:11:11.372876 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gwnt"] Mar 18 13:11:11.397383 master-0 kubenswrapper[7146]: I0318 13:11:11.397312 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7499"] Mar 18 13:11:11.399640 master-0 kubenswrapper[7146]: I0318 13:11:11.399553 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7499"] Mar 18 13:11:11.555572 master-0 kubenswrapper[7146]: I0318 13:11:11.555505 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:11.555572 master-0 kubenswrapper[7146]: I0318 13:11:11.555565 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:11.556256 master-0 kubenswrapper[7146]: I0318 13:11:11.556225 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:11.556316 master-0 kubenswrapper[7146]: I0318 13:11:11.556270 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:11.559593 master-0 kubenswrapper[7146]: I0318 13:11:11.559562 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:11.561695 master-0 kubenswrapper[7146]: I0318 13:11:11.560684 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:12.026317 master-0 kubenswrapper[7146]: I0318 13:11:12.026262 7146 generic.go:334] "Generic (PLEG): container finished" podID="2e0fa133-60e7-47d0-996e-7e85aef2a218" containerID="836f1a7c930855d212400a0b9071a021a023048ff4b32354f92013971f61bd95" exitCode=0 Mar 18 13:11:12.026514 master-0 kubenswrapper[7146]: I0318 13:11:12.026343 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p546b" event={"ID":"2e0fa133-60e7-47d0-996e-7e85aef2a218","Type":"ContainerDied","Data":"836f1a7c930855d212400a0b9071a021a023048ff4b32354f92013971f61bd95"} Mar 18 13:11:12.026514 master-0 kubenswrapper[7146]: I0318 13:11:12.026370 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p546b" event={"ID":"2e0fa133-60e7-47d0-996e-7e85aef2a218","Type":"ContainerStarted","Data":"1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e"} Mar 18 13:11:12.029264 master-0 kubenswrapper[7146]: I0318 13:11:12.029202 7146 generic.go:334] "Generic (PLEG): container finished" podID="317a89ea-e9dd-4167-8568-bb36e2431015" containerID="96bc6ce0c52ae9fd7504e8b7f02dc2906216b82766d2b59e05d4794bbbc1c386" exitCode=0 Mar 18 13:11:12.029890 master-0 kubenswrapper[7146]: I0318 13:11:12.029854 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwvw" 
event={"ID":"317a89ea-e9dd-4167-8568-bb36e2431015","Type":"ContainerDied","Data":"96bc6ce0c52ae9fd7504e8b7f02dc2906216b82766d2b59e05d4794bbbc1c386"} Mar 18 13:11:12.029964 master-0 kubenswrapper[7146]: I0318 13:11:12.029893 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwvw" event={"ID":"317a89ea-e9dd-4167-8568-bb36e2431015","Type":"ContainerStarted","Data":"0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588"} Mar 18 13:11:12.034747 master-0 kubenswrapper[7146]: I0318 13:11:12.034709 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:12.036510 master-0 kubenswrapper[7146]: I0318 13:11:12.036471 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:12.649708 master-0 kubenswrapper[7146]: I0318 13:11:12.647835 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srjhk"] Mar 18 13:11:12.649708 master-0 kubenswrapper[7146]: I0318 13:11:12.648122 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-srjhk" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="registry-server" containerID="cri-o://0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2" gracePeriod=2 Mar 18 13:11:12.839118 master-0 kubenswrapper[7146]: I0318 13:11:12.839066 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6vkz"] Mar 18 13:11:12.839497 master-0 kubenswrapper[7146]: I0318 13:11:12.839446 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s6vkz" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="registry-server" 
containerID="cri-o://bf6103b476cbe5f000efeec38d0e1eab0cb03070f7c4c9474f643324ed27d01a" gracePeriod=2 Mar 18 13:11:12.986787 master-0 kubenswrapper[7146]: I0318 13:11:12.986746 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:11:13.064579 master-0 kubenswrapper[7146]: I0318 13:11:13.064471 7146 generic.go:334] "Generic (PLEG): container finished" podID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerID="bf6103b476cbe5f000efeec38d0e1eab0cb03070f7c4c9474f643324ed27d01a" exitCode=0 Mar 18 13:11:13.064767 master-0 kubenswrapper[7146]: I0318 13:11:13.064723 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerDied","Data":"bf6103b476cbe5f000efeec38d0e1eab0cb03070f7c4c9474f643324ed27d01a"} Mar 18 13:11:13.070176 master-0 kubenswrapper[7146]: I0318 13:11:13.070126 7146 generic.go:334] "Generic (PLEG): container finished" podID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerID="0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2" exitCode=0 Mar 18 13:11:13.071171 master-0 kubenswrapper[7146]: I0318 13:11:13.071099 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srjhk" Mar 18 13:11:13.071589 master-0 kubenswrapper[7146]: I0318 13:11:13.071554 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerDied","Data":"0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2"} Mar 18 13:11:13.071632 master-0 kubenswrapper[7146]: I0318 13:11:13.071602 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srjhk" event={"ID":"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a","Type":"ContainerDied","Data":"375f229e561ce7c3ba595936a0178638cea02d8b22c4089efd4e83226dfb0f4d"} Mar 18 13:11:13.071632 master-0 kubenswrapper[7146]: I0318 13:11:13.071623 7146 scope.go:117] "RemoveContainer" containerID="0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075125 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7pj2"] Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075368 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="extract-content" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075386 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="extract-content" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075401 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075409 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="registry-server" Mar 18 13:11:13.075796 master-0 
kubenswrapper[7146]: E0318 13:11:13.075421 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075428 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075442 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="extract-utilities" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075452 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="extract-utilities" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075461 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="extract-content" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075468 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="extract-content" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075479 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="extract-utilities" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075486 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="extract-utilities" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075497 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075505 7146 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075515 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="extract-content" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075523 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="extract-content" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: E0318 13:11:13.075539 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="extract-utilities" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075546 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="extract-utilities" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075647 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075665 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" containerName="registry-server" Mar 18 13:11:13.075796 master-0 kubenswrapper[7146]: I0318 13:11:13.075680 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" containerName="registry-server" Mar 18 13:11:13.083644 master-0 kubenswrapper[7146]: I0318 13:11:13.078244 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.083644 master-0 kubenswrapper[7146]: I0318 13:11:13.080703 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-utilities\") pod \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " Mar 18 13:11:13.083644 master-0 kubenswrapper[7146]: I0318 13:11:13.080835 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-catalog-content\") pod \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " Mar 18 13:11:13.083644 master-0 kubenswrapper[7146]: I0318 13:11:13.080881 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x84xd\" (UniqueName: \"kubernetes.io/projected/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-kube-api-access-x84xd\") pod \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\" (UID: \"e2df6721-ccd2-41e5-bfd5-bd8d277dd57a\") " Mar 18 13:11:13.085728 master-0 kubenswrapper[7146]: I0318 13:11:13.085145 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-kube-api-access-x84xd" (OuterVolumeSpecName: "kube-api-access-x84xd") pod "e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" (UID: "e2df6721-ccd2-41e5-bfd5-bd8d277dd57a"). InnerVolumeSpecName "kube-api-access-x84xd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:13.085728 master-0 kubenswrapper[7146]: I0318 13:11:13.085301 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-utilities" (OuterVolumeSpecName: "utilities") pod "e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" (UID: "e2df6721-ccd2-41e5-bfd5-bd8d277dd57a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:13.092700 master-0 kubenswrapper[7146]: I0318 13:11:13.092666 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-68f42" Mar 18 13:11:13.098920 master-0 kubenswrapper[7146]: I0318 13:11:13.098585 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7pj2"] Mar 18 13:11:13.129207 master-0 kubenswrapper[7146]: I0318 13:11:13.129160 7146 scope.go:117] "RemoveContainer" containerID="a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b" Mar 18 13:11:13.136984 master-0 kubenswrapper[7146]: I0318 13:11:13.136907 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" (UID: "e2df6721-ccd2-41e5-bfd5-bd8d277dd57a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:13.167147 master-0 kubenswrapper[7146]: I0318 13:11:13.166814 7146 scope.go:117] "RemoveContainer" containerID="1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99" Mar 18 13:11:13.198084 master-0 kubenswrapper[7146]: I0318 13:11:13.197877 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-utilities\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.198084 master-0 kubenswrapper[7146]: I0318 13:11:13.197992 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-catalog-content\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.198599 master-0 kubenswrapper[7146]: I0318 13:11:13.198471 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljm6\" (UniqueName: \"kubernetes.io/projected/d2cf9274-25d2-4576-bbef-1d416dfff0a9-kube-api-access-vljm6\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.198797 master-0 kubenswrapper[7146]: I0318 13:11:13.198758 7146 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:13.198797 master-0 kubenswrapper[7146]: I0318 13:11:13.198786 7146 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:13.198797 master-0 kubenswrapper[7146]: I0318 13:11:13.198797 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x84xd\" (UniqueName: \"kubernetes.io/projected/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a-kube-api-access-x84xd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:13.202642 master-0 kubenswrapper[7146]: I0318 13:11:13.202604 7146 scope.go:117] "RemoveContainer" containerID="0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2" Mar 18 13:11:13.203317 master-0 kubenswrapper[7146]: E0318 13:11:13.203218 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2\": container with ID starting with 0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2 not found: ID does not exist" containerID="0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2" Mar 18 13:11:13.203317 master-0 kubenswrapper[7146]: I0318 13:11:13.203265 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2"} err="failed to get container status \"0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2\": rpc error: code = NotFound desc = could not find container \"0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2\": container with ID starting with 0c2e3826119c940a98a6c86c39bf2421746ab171d1d0885df4be93564bd994a2 not found: ID does not exist" Mar 18 13:11:13.203317 master-0 kubenswrapper[7146]: I0318 13:11:13.203296 7146 scope.go:117] "RemoveContainer" containerID="a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b" Mar 18 13:11:13.203613 master-0 kubenswrapper[7146]: E0318 13:11:13.203584 7146 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b\": container with ID starting with a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b not found: ID does not exist" containerID="a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b" Mar 18 13:11:13.203666 master-0 kubenswrapper[7146]: I0318 13:11:13.203613 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b"} err="failed to get container status \"a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b\": rpc error: code = NotFound desc = could not find container \"a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b\": container with ID starting with a1d12eab964d3773f85fd1c53d44479112b0a13ea00f11b75308ae575b897e5b not found: ID does not exist" Mar 18 13:11:13.203666 master-0 kubenswrapper[7146]: I0318 13:11:13.203634 7146 scope.go:117] "RemoveContainer" containerID="1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99" Mar 18 13:11:13.204052 master-0 kubenswrapper[7146]: E0318 13:11:13.204035 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99\": container with ID starting with 1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99 not found: ID does not exist" containerID="1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99" Mar 18 13:11:13.204108 master-0 kubenswrapper[7146]: I0318 13:11:13.204056 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99"} err="failed to get container status \"1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99\": rpc error: 
code = NotFound desc = could not find container \"1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99\": container with ID starting with 1680bc7cb7bda0d4689e9c5a8cd2d140dc338dc26c871d197cd2591dce5e5a99 not found: ID does not exist" Mar 18 13:11:13.244625 master-0 kubenswrapper[7146]: I0318 13:11:13.244557 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-459lq"] Mar 18 13:11:13.246238 master-0 kubenswrapper[7146]: I0318 13:11:13.246152 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.248295 master-0 kubenswrapper[7146]: I0318 13:11:13.248205 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2d45m" Mar 18 13:11:13.262608 master-0 kubenswrapper[7146]: I0318 13:11:13.262557 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-459lq"] Mar 18 13:11:13.302407 master-0 kubenswrapper[7146]: I0318 13:11:13.302343 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljm6\" (UniqueName: \"kubernetes.io/projected/d2cf9274-25d2-4576-bbef-1d416dfff0a9-kube-api-access-vljm6\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.302586 master-0 kubenswrapper[7146]: I0318 13:11:13.302465 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-catalog-content\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.302586 master-0 kubenswrapper[7146]: I0318 13:11:13.302544 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-utilities\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.303174 master-0 kubenswrapper[7146]: I0318 13:11:13.302624 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6d7j\" (UniqueName: \"kubernetes.io/projected/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-kube-api-access-q6d7j\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.303259 master-0 kubenswrapper[7146]: I0318 13:11:13.303206 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-utilities\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.303316 master-0 kubenswrapper[7146]: I0318 13:11:13.303274 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-catalog-content\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.303753 master-0 kubenswrapper[7146]: I0318 13:11:13.303668 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:11:13.304254 master-0 kubenswrapper[7146]: I0318 13:11:13.304204 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-catalog-content\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.304254 master-0 kubenswrapper[7146]: I0318 13:11:13.304226 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-utilities\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.319719 master-0 kubenswrapper[7146]: I0318 13:11:13.319679 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljm6\" (UniqueName: \"kubernetes.io/projected/d2cf9274-25d2-4576-bbef-1d416dfff0a9-kube-api-access-vljm6\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.367657 master-0 kubenswrapper[7146]: I0318 13:11:13.367604 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afcb319-16c7-4f27-9db8-21b105a1bdba" path="/var/lib/kubelet/pods/1afcb319-16c7-4f27-9db8-21b105a1bdba/volumes" Mar 18 13:11:13.369016 master-0 kubenswrapper[7146]: I0318 13:11:13.368990 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b282ab6f-702c-44cc-942e-f2320b61d42e" path="/var/lib/kubelet/pods/b282ab6f-702c-44cc-942e-f2320b61d42e/volumes" Mar 18 13:11:13.404433 master-0 kubenswrapper[7146]: I0318 13:11:13.404339 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrzkc\" (UniqueName: 
\"kubernetes.io/projected/aeed4251-c92a-49e9-a785-9903d84ca0d6-kube-api-access-hrzkc\") pod \"aeed4251-c92a-49e9-a785-9903d84ca0d6\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " Mar 18 13:11:13.404433 master-0 kubenswrapper[7146]: I0318 13:11:13.404392 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-utilities\") pod \"aeed4251-c92a-49e9-a785-9903d84ca0d6\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " Mar 18 13:11:13.404918 master-0 kubenswrapper[7146]: I0318 13:11:13.404465 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-catalog-content\") pod \"aeed4251-c92a-49e9-a785-9903d84ca0d6\" (UID: \"aeed4251-c92a-49e9-a785-9903d84ca0d6\") " Mar 18 13:11:13.405518 master-0 kubenswrapper[7146]: I0318 13:11:13.405483 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-utilities\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.405604 master-0 kubenswrapper[7146]: I0318 13:11:13.405538 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6d7j\" (UniqueName: \"kubernetes.io/projected/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-kube-api-access-q6d7j\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.405604 master-0 kubenswrapper[7146]: I0318 13:11:13.405583 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-catalog-content\") pod 
\"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.407475 master-0 kubenswrapper[7146]: I0318 13:11:13.407181 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-catalog-content\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.407475 master-0 kubenswrapper[7146]: I0318 13:11:13.407292 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-utilities" (OuterVolumeSpecName: "utilities") pod "aeed4251-c92a-49e9-a785-9903d84ca0d6" (UID: "aeed4251-c92a-49e9-a785-9903d84ca0d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:13.407727 master-0 kubenswrapper[7146]: I0318 13:11:13.407681 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-utilities\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.410338 master-0 kubenswrapper[7146]: I0318 13:11:13.410245 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeed4251-c92a-49e9-a785-9903d84ca0d6-kube-api-access-hrzkc" (OuterVolumeSpecName: "kube-api-access-hrzkc") pod "aeed4251-c92a-49e9-a785-9903d84ca0d6" (UID: "aeed4251-c92a-49e9-a785-9903d84ca0d6"). InnerVolumeSpecName "kube-api-access-hrzkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:13.411503 master-0 kubenswrapper[7146]: I0318 13:11:13.411473 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srjhk"] Mar 18 13:11:13.414030 master-0 kubenswrapper[7146]: I0318 13:11:13.413925 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-srjhk"] Mar 18 13:11:13.423588 master-0 kubenswrapper[7146]: I0318 13:11:13.423539 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6d7j\" (UniqueName: \"kubernetes.io/projected/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-kube-api-access-q6d7j\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.475440 master-0 kubenswrapper[7146]: I0318 13:11:13.475313 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:13.507039 master-0 kubenswrapper[7146]: I0318 13:11:13.506984 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrzkc\" (UniqueName: \"kubernetes.io/projected/aeed4251-c92a-49e9-a785-9903d84ca0d6-kube-api-access-hrzkc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:13.507039 master-0 kubenswrapper[7146]: I0318 13:11:13.507017 7146 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:13.547683 master-0 kubenswrapper[7146]: I0318 13:11:13.547591 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeed4251-c92a-49e9-a785-9903d84ca0d6" (UID: "aeed4251-c92a-49e9-a785-9903d84ca0d6"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:11:13.599548 master-0 kubenswrapper[7146]: I0318 13:11:13.599496 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:13.608593 master-0 kubenswrapper[7146]: I0318 13:11:13.608553 7146 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeed4251-c92a-49e9-a785-9903d84ca0d6-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:13.863171 master-0 kubenswrapper[7146]: I0318 13:11:13.862773 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7pj2"] Mar 18 13:11:14.004593 master-0 kubenswrapper[7146]: I0318 13:11:14.004544 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-459lq"] Mar 18 13:11:14.017972 master-0 kubenswrapper[7146]: W0318 13:11:14.016366 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d8f08f_4c57_44e0_8e8f_3969287e2a5a.slice/crio-a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9 WatchSource:0}: Error finding container a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9: Status 404 returned error can't find the container with id a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9 Mar 18 13:11:14.076873 master-0 kubenswrapper[7146]: I0318 13:11:14.076754 7146 generic.go:334] "Generic (PLEG): container finished" podID="d2cf9274-25d2-4576-bbef-1d416dfff0a9" containerID="4e40f363b03daa87aca7cb71f28f83a28265ae86967a44b24bfca71c4bc0dc50" exitCode=0 Mar 18 13:11:14.076873 master-0 kubenswrapper[7146]: I0318 13:11:14.076834 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7pj2" 
event={"ID":"d2cf9274-25d2-4576-bbef-1d416dfff0a9","Type":"ContainerDied","Data":"4e40f363b03daa87aca7cb71f28f83a28265ae86967a44b24bfca71c4bc0dc50"} Mar 18 13:11:14.077294 master-0 kubenswrapper[7146]: I0318 13:11:14.077222 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7pj2" event={"ID":"d2cf9274-25d2-4576-bbef-1d416dfff0a9","Type":"ContainerStarted","Data":"6521ed821b17acabe4b6b4013792bafdd43c6335da5eba7b335ddb8b9407cf09"} Mar 18 13:11:14.081401 master-0 kubenswrapper[7146]: I0318 13:11:14.081324 7146 generic.go:334] "Generic (PLEG): container finished" podID="317a89ea-e9dd-4167-8568-bb36e2431015" containerID="a1b3cbd921167497cc25d4be38b7e050d4aa38d0f715b02595c72432dd0720c9" exitCode=0 Mar 18 13:11:14.081401 master-0 kubenswrapper[7146]: I0318 13:11:14.081369 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwvw" event={"ID":"317a89ea-e9dd-4167-8568-bb36e2431015","Type":"ContainerDied","Data":"a1b3cbd921167497cc25d4be38b7e050d4aa38d0f715b02595c72432dd0720c9"} Mar 18 13:11:14.085244 master-0 kubenswrapper[7146]: I0318 13:11:14.084486 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6vkz" Mar 18 13:11:14.085380 master-0 kubenswrapper[7146]: I0318 13:11:14.085265 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6vkz" event={"ID":"aeed4251-c92a-49e9-a785-9903d84ca0d6","Type":"ContainerDied","Data":"d34f95855ed1c06ff9e9c9318208614cb98f25ab4499c2a654f96d13704f90e3"} Mar 18 13:11:14.085380 master-0 kubenswrapper[7146]: I0318 13:11:14.085314 7146 scope.go:117] "RemoveContainer" containerID="bf6103b476cbe5f000efeec38d0e1eab0cb03070f7c4c9474f643324ed27d01a" Mar 18 13:11:14.089728 master-0 kubenswrapper[7146]: I0318 13:11:14.089695 7146 generic.go:334] "Generic (PLEG): container finished" podID="2e0fa133-60e7-47d0-996e-7e85aef2a218" containerID="d80a42f9544d6f5e1c4d2d61a2c430a6b656748331ed61e7746687405bcba5ee" exitCode=0 Mar 18 13:11:14.090205 master-0 kubenswrapper[7146]: I0318 13:11:14.090153 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p546b" event={"ID":"2e0fa133-60e7-47d0-996e-7e85aef2a218","Type":"ContainerDied","Data":"d80a42f9544d6f5e1c4d2d61a2c430a6b656748331ed61e7746687405bcba5ee"} Mar 18 13:11:14.095970 master-0 kubenswrapper[7146]: I0318 13:11:14.095898 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-459lq" event={"ID":"35d8f08f-4c57-44e0-8e8f-3969287e2a5a","Type":"ContainerStarted","Data":"a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9"} Mar 18 13:11:14.124622 master-0 kubenswrapper[7146]: I0318 13:11:14.124557 7146 scope.go:117] "RemoveContainer" containerID="899460e45f82d95897613445afb3c5be2cc8dcbea4246a3823b8133d56c197e4" Mar 18 13:11:14.153934 master-0 kubenswrapper[7146]: I0318 13:11:14.153903 7146 scope.go:117] "RemoveContainer" containerID="d16347f3e9ba2d5bd6ec0c2072d5dd188dabafd7872c967eafdae811def53a67" Mar 18 13:11:14.536223 master-0 kubenswrapper[7146]: I0318 13:11:14.536154 7146 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6vkz"] Mar 18 13:11:15.101493 master-0 kubenswrapper[7146]: I0318 13:11:15.101418 7146 generic.go:334] "Generic (PLEG): container finished" podID="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" containerID="d2897cc2c8562aeaec2aa9acaf8c187af617a13c66e8bd4ee5d5cb3869d53d9c" exitCode=0 Mar 18 13:11:15.101493 master-0 kubenswrapper[7146]: I0318 13:11:15.101464 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-459lq" event={"ID":"35d8f08f-4c57-44e0-8e8f-3969287e2a5a","Type":"ContainerDied","Data":"d2897cc2c8562aeaec2aa9acaf8c187af617a13c66e8bd4ee5d5cb3869d53d9c"} Mar 18 13:11:15.375415 master-0 kubenswrapper[7146]: I0318 13:11:15.375168 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2df6721-ccd2-41e5-bfd5-bd8d277dd57a" path="/var/lib/kubelet/pods/e2df6721-ccd2-41e5-bfd5-bd8d277dd57a/volumes" Mar 18 13:11:15.408136 master-0 kubenswrapper[7146]: I0318 13:11:15.408073 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s6vkz"] Mar 18 13:11:17.114501 master-0 kubenswrapper[7146]: I0318 13:11:17.114417 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p546b" event={"ID":"2e0fa133-60e7-47d0-996e-7e85aef2a218","Type":"ContainerStarted","Data":"74e87e2d8aa52c9ca61ff8069429d529a38bba095453380dfa6cfb95b7f1f1b4"} Mar 18 13:11:17.364658 master-0 kubenswrapper[7146]: I0318 13:11:17.364485 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" path="/var/lib/kubelet/pods/aeed4251-c92a-49e9-a785-9903d84ca0d6/volumes" Mar 18 13:11:17.638368 master-0 kubenswrapper[7146]: I0318 13:11:17.638227 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p546b" podStartSLOduration=3.770650925 
podStartE2EDuration="7.638205893s" podCreationTimestamp="2026-03-18 13:11:10 +0000 UTC" firstStartedPulling="2026-03-18 13:11:12.027952085 +0000 UTC m=+180.836169456" lastFinishedPulling="2026-03-18 13:11:15.895507063 +0000 UTC m=+184.703724424" observedRunningTime="2026-03-18 13:11:17.634540858 +0000 UTC m=+186.442758219" watchObservedRunningTime="2026-03-18 13:11:17.638205893 +0000 UTC m=+186.446423254" Mar 18 13:11:18.121647 master-0 kubenswrapper[7146]: I0318 13:11:18.121563 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-459lq" event={"ID":"35d8f08f-4c57-44e0-8e8f-3969287e2a5a","Type":"ContainerStarted","Data":"ab2a031030fcae05fc3de61ba8959c18a5ad439c27b9db65dec83eb634e7acf2"} Mar 18 13:11:18.123927 master-0 kubenswrapper[7146]: I0318 13:11:18.123861 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7pj2" event={"ID":"d2cf9274-25d2-4576-bbef-1d416dfff0a9","Type":"ContainerStarted","Data":"db73a77a31c8b1e864924b98296d985e4ebe8a8cec9a1770fc0976a7285d12ff"} Mar 18 13:11:18.126535 master-0 kubenswrapper[7146]: I0318 13:11:18.126468 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwvw" event={"ID":"317a89ea-e9dd-4167-8568-bb36e2431015","Type":"ContainerStarted","Data":"66c09a1be84202c228ee34f3b5d3ed0e1ba89cc96e863cbacdecd6906056669b"} Mar 18 13:11:18.197072 master-0 kubenswrapper[7146]: I0318 13:11:18.196775 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nhwvw" podStartSLOduration=2.551272318 podStartE2EDuration="8.196751974s" podCreationTimestamp="2026-03-18 13:11:10 +0000 UTC" firstStartedPulling="2026-03-18 13:11:12.033237127 +0000 UTC m=+180.841454488" lastFinishedPulling="2026-03-18 13:11:17.678716793 +0000 UTC m=+186.486934144" observedRunningTime="2026-03-18 13:11:18.177271081 +0000 UTC m=+186.985488442" 
watchObservedRunningTime="2026-03-18 13:11:18.196751974 +0000 UTC m=+187.004969335" Mar 18 13:11:19.134114 master-0 kubenswrapper[7146]: I0318 13:11:19.134041 7146 generic.go:334] "Generic (PLEG): container finished" podID="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" containerID="ab2a031030fcae05fc3de61ba8959c18a5ad439c27b9db65dec83eb634e7acf2" exitCode=0 Mar 18 13:11:19.134901 master-0 kubenswrapper[7146]: I0318 13:11:19.134151 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-459lq" event={"ID":"35d8f08f-4c57-44e0-8e8f-3969287e2a5a","Type":"ContainerDied","Data":"ab2a031030fcae05fc3de61ba8959c18a5ad439c27b9db65dec83eb634e7acf2"} Mar 18 13:11:19.137979 master-0 kubenswrapper[7146]: I0318 13:11:19.137887 7146 generic.go:334] "Generic (PLEG): container finished" podID="d2cf9274-25d2-4576-bbef-1d416dfff0a9" containerID="db73a77a31c8b1e864924b98296d985e4ebe8a8cec9a1770fc0976a7285d12ff" exitCode=0 Mar 18 13:11:19.139538 master-0 kubenswrapper[7146]: I0318 13:11:19.139486 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7pj2" event={"ID":"d2cf9274-25d2-4576-bbef-1d416dfff0a9","Type":"ContainerDied","Data":"db73a77a31c8b1e864924b98296d985e4ebe8a8cec9a1770fc0976a7285d12ff"} Mar 18 13:11:20.147076 master-0 kubenswrapper[7146]: I0318 13:11:20.147023 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-459lq" event={"ID":"35d8f08f-4c57-44e0-8e8f-3969287e2a5a","Type":"ContainerStarted","Data":"5209e28398300f1bf5ab8ae47e35128b44ab7d0283aa50bf2137f838e3a38082"} Mar 18 13:11:20.149054 master-0 kubenswrapper[7146]: I0318 13:11:20.149005 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7pj2" event={"ID":"d2cf9274-25d2-4576-bbef-1d416dfff0a9","Type":"ContainerStarted","Data":"bc5c1e816497f8cbc7cb2718088de141c80b1b539c9b3e2af0e4499989a0ed3e"} Mar 18 13:11:20.172017 master-0 
kubenswrapper[7146]: I0318 13:11:20.171903 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-459lq" podStartSLOduration=2.617941971 podStartE2EDuration="7.171864461s" podCreationTimestamp="2026-03-18 13:11:13 +0000 UTC" firstStartedPulling="2026-03-18 13:11:15.102904416 +0000 UTC m=+183.911121777" lastFinishedPulling="2026-03-18 13:11:19.656826906 +0000 UTC m=+188.465044267" observedRunningTime="2026-03-18 13:11:20.168618318 +0000 UTC m=+188.976835709" watchObservedRunningTime="2026-03-18 13:11:20.171864461 +0000 UTC m=+188.980081822" Mar 18 13:11:20.672781 master-0 kubenswrapper[7146]: I0318 13:11:20.672700 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:20.672781 master-0 kubenswrapper[7146]: I0318 13:11:20.672773 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:20.714231 master-0 kubenswrapper[7146]: I0318 13:11:20.714163 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:20.714231 master-0 kubenswrapper[7146]: I0318 13:11:20.714234 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:20.724495 master-0 kubenswrapper[7146]: I0318 13:11:20.724449 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:20.744042 master-0 kubenswrapper[7146]: I0318 13:11:20.743961 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7pj2" podStartSLOduration=2.155249168 podStartE2EDuration="7.743923883s" podCreationTimestamp="2026-03-18 13:11:13 +0000 UTC" firstStartedPulling="2026-03-18 13:11:14.079336974 +0000 UTC 
m=+182.887554335" lastFinishedPulling="2026-03-18 13:11:19.668011689 +0000 UTC m=+188.476229050" observedRunningTime="2026-03-18 13:11:20.188634745 +0000 UTC m=+188.996852116" watchObservedRunningTime="2026-03-18 13:11:20.743923883 +0000 UTC m=+189.552141254" Mar 18 13:11:20.752422 master-0 kubenswrapper[7146]: I0318 13:11:20.752383 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:21.187772 master-0 kubenswrapper[7146]: I0318 13:11:21.187733 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:11:23.476068 master-0 kubenswrapper[7146]: I0318 13:11:23.475989 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:23.476726 master-0 kubenswrapper[7146]: I0318 13:11:23.476092 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:23.528773 master-0 kubenswrapper[7146]: I0318 13:11:23.528725 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:23.601222 master-0 kubenswrapper[7146]: I0318 13:11:23.601130 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:23.601222 master-0 kubenswrapper[7146]: I0318 13:11:23.601218 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:24.209343 master-0 kubenswrapper[7146]: I0318 13:11:24.209301 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:11:24.640909 master-0 kubenswrapper[7146]: I0318 13:11:24.640780 7146 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-459lq" podUID="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" containerName="registry-server" probeResult="failure" output=< Mar 18 13:11:24.640909 master-0 kubenswrapper[7146]: timeout: failed to connect service ":50051" within 1s Mar 18 13:11:24.640909 master-0 kubenswrapper[7146]: > Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: I0318 13:11:24.741909 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"] Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: E0318 13:11:24.744863 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="extract-utilities" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: I0318 13:11:24.744887 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="extract-utilities" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: E0318 13:11:24.744902 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="registry-server" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: I0318 13:11:24.744911 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="registry-server" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: E0318 13:11:24.744957 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="extract-content" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: I0318 13:11:24.744966 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" containerName="extract-content" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: I0318 13:11:24.745087 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeed4251-c92a-49e9-a785-9903d84ca0d6" 
containerName="registry-server" Mar 18 13:11:24.747962 master-0 kubenswrapper[7146]: I0318 13:11:24.745787 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:11:24.759959 master-0 kubenswrapper[7146]: I0318 13:11:24.753233 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:11:24.759959 master-0 kubenswrapper[7146]: I0318 13:11:24.753512 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-spdqf" Mar 18 13:11:24.759959 master-0 kubenswrapper[7146]: I0318 13:11:24.753701 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:11:24.759959 master-0 kubenswrapper[7146]: I0318 13:11:24.753849 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"] Mar 18 13:11:24.759959 master-0 kubenswrapper[7146]: I0318 13:11:24.754489 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 13:11:24.774070 master-0 kubenswrapper[7146]: I0318 13:11:24.773428 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="kube-rbac-proxy" containerID="cri-o://5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b" gracePeriod=30 Mar 18 13:11:24.774070 master-0 kubenswrapper[7146]: I0318 13:11:24.773562 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="machine-approver-controller" 
containerID="cri-o://37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb" gracePeriod=30 Mar 18 13:11:24.779810 master-0 kubenswrapper[7146]: I0318 13:11:24.779778 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"] Mar 18 13:11:24.780870 master-0 kubenswrapper[7146]: I0318 13:11:24.780827 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:11:24.792140 master-0 kubenswrapper[7146]: I0318 13:11:24.790126 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"] Mar 18 13:11:24.792140 master-0 kubenswrapper[7146]: I0318 13:11:24.791444 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:11:24.792140 master-0 kubenswrapper[7146]: I0318 13:11:24.791856 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:11:24.792140 master-0 kubenswrapper[7146]: I0318 13:11:24.791887 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:11:24.792140 master-0 kubenswrapper[7146]: I0318 13:11:24.791930 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:11:24.792140 master-0 kubenswrapper[7146]: I0318 13:11:24.792022 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-2vnp2" Mar 18 13:11:24.796124 master-0 kubenswrapper[7146]: I0318 13:11:24.796053 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-ckwz8"] Mar 18 13:11:24.798264 master-0 
kubenswrapper[7146]: I0318 13:11:24.798191 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.806599 master-0 kubenswrapper[7146]: I0318 13:11:24.805787 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 13:11:24.807439 master-0 kubenswrapper[7146]: I0318 13:11:24.807216 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 13:11:24.809043 master-0 kubenswrapper[7146]: I0318 13:11:24.807641 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 13:11:24.811620 master-0 kubenswrapper[7146]: I0318 13:11:24.811464 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 13:11:24.814434 master-0 kubenswrapper[7146]: I0318 13:11:24.813196 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 13:11:24.814434 master-0 kubenswrapper[7146]: I0318 13:11:24.813365 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2bvqk"
Mar 18 13:11:24.822332 master-0 kubenswrapper[7146]: I0318 13:11:24.822286 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 13:11:24.827325 master-0 kubenswrapper[7146]: I0318 13:11:24.826151 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"]
Mar 18 13:11:24.834790 master-0 kubenswrapper[7146]: I0318 13:11:24.833306 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-ckwz8"]
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883224 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883279 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883300 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883322 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883342 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27hr\" (UniqueName: \"kubernetes.io/projected/2385db6b-4286-4839-822c-aa9c52290172-kube-api-access-d27hr\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883359 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbztv\" (UniqueName: \"kubernetes.io/projected/c074751c-6b3c-44df-aca5-42fa69662779-kube-api-access-bbztv\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883375 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883413 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.883424 master-0 kubenswrapper[7146]: I0318 13:11:24.883435 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c074751c-6b3c-44df-aca5-42fa69662779-snapshots\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.883907 master-0 kubenswrapper[7146]: I0318 13:11:24.883452 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.883907 master-0 kubenswrapper[7146]: I0318 13:11:24.883472 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.883907 master-0 kubenswrapper[7146]: I0318 13:11:24.883497 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.883907 master-0 kubenswrapper[7146]: I0318 13:11:24.883517 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvd5\" (UniqueName: \"kubernetes.io/projected/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-kube-api-access-5dvd5\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.984955 master-0 kubenswrapper[7146]: I0318 13:11:24.984866 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.985146 master-0 kubenswrapper[7146]: I0318 13:11:24.984971 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.985146 master-0 kubenswrapper[7146]: I0318 13:11:24.985002 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.985146 master-0 kubenswrapper[7146]: I0318 13:11:24.985034 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27hr\" (UniqueName: \"kubernetes.io/projected/2385db6b-4286-4839-822c-aa9c52290172-kube-api-access-d27hr\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.985146 master-0 kubenswrapper[7146]: I0318 13:11:24.985065 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbztv\" (UniqueName: \"kubernetes.io/projected/c074751c-6b3c-44df-aca5-42fa69662779-kube-api-access-bbztv\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.985146 master-0 kubenswrapper[7146]: I0318 13:11:24.985086 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.985146 master-0 kubenswrapper[7146]: I0318 13:11:24.985134 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.985323 master-0 kubenswrapper[7146]: I0318 13:11:24.985171 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c074751c-6b3c-44df-aca5-42fa69662779-snapshots\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.985323 master-0 kubenswrapper[7146]: I0318 13:11:24.985191 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.985323 master-0 kubenswrapper[7146]: I0318 13:11:24.985210 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.985323 master-0 kubenswrapper[7146]: I0318 13:11:24.985238 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.985323 master-0 kubenswrapper[7146]: I0318 13:11:24.985259 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvd5\" (UniqueName: \"kubernetes.io/projected/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-kube-api-access-5dvd5\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.985323 master-0 kubenswrapper[7146]: I0318 13:11:24.985283 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.986594 master-0 kubenswrapper[7146]: I0318 13:11:24.986210 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.986658 master-0 kubenswrapper[7146]: I0318 13:11:24.986600 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c074751c-6b3c-44df-aca5-42fa69662779-snapshots\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.987085 master-0 kubenswrapper[7146]: I0318 13:11:24.987046 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.987691 master-0 kubenswrapper[7146]: I0318 13:11:24.987666 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:24.987790 master-0 kubenswrapper[7146]: I0318 13:11:24.987770 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.989866 master-0 kubenswrapper[7146]: I0318 13:11:24.989826 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.994664 master-0 kubenswrapper[7146]: I0318 13:11:24.994597 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:24.995253 master-0 kubenswrapper[7146]: I0318 13:11:24.995224 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:24.996749 master-0 kubenswrapper[7146]: I0318 13:11:24.996722 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:25.003094 master-0 kubenswrapper[7146]: I0318 13:11:25.003011 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:25.007429 master-0 kubenswrapper[7146]: I0318 13:11:25.007392 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"
Mar 18 13:11:25.064630 master-0 kubenswrapper[7146]: I0318 13:11:25.064581 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27hr\" (UniqueName: \"kubernetes.io/projected/2385db6b-4286-4839-822c-aa9c52290172-kube-api-access-d27hr\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:25.064823 master-0 kubenswrapper[7146]: I0318 13:11:25.064734 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvd5\" (UniqueName: \"kubernetes.io/projected/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-kube-api-access-5dvd5\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:25.066982 master-0 kubenswrapper[7146]: I0318 13:11:25.064925 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbztv\" (UniqueName: \"kubernetes.io/projected/c074751c-6b3c-44df-aca5-42fa69662779-kube-api-access-bbztv\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:25.086271 master-0 kubenswrapper[7146]: I0318 13:11:25.086214 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24935b14-2768-435e-8ed1-73ecac4e05d8-machine-approver-tls\") pod \"24935b14-2768-435e-8ed1-73ecac4e05d8\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") "
Mar 18 13:11:25.086497 master-0 kubenswrapper[7146]: I0318 13:11:25.086294 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-auth-proxy-config\") pod \"24935b14-2768-435e-8ed1-73ecac4e05d8\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") "
Mar 18 13:11:25.086497 master-0 kubenswrapper[7146]: I0318 13:11:25.086369 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fljgc\" (UniqueName: \"kubernetes.io/projected/24935b14-2768-435e-8ed1-73ecac4e05d8-kube-api-access-fljgc\") pod \"24935b14-2768-435e-8ed1-73ecac4e05d8\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") "
Mar 18 13:11:25.086497 master-0 kubenswrapper[7146]: I0318 13:11:25.086436 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-config\") pod \"24935b14-2768-435e-8ed1-73ecac4e05d8\" (UID: \"24935b14-2768-435e-8ed1-73ecac4e05d8\") "
Mar 18 13:11:25.087508 master-0 kubenswrapper[7146]: I0318 13:11:25.087131 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-config" (OuterVolumeSpecName: "config") pod "24935b14-2768-435e-8ed1-73ecac4e05d8" (UID: "24935b14-2768-435e-8ed1-73ecac4e05d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:11:25.087508 master-0 kubenswrapper[7146]: I0318 13:11:25.087459 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "24935b14-2768-435e-8ed1-73ecac4e05d8" (UID: "24935b14-2768-435e-8ed1-73ecac4e05d8"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:11:25.089352 master-0 kubenswrapper[7146]: I0318 13:11:25.089322 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24935b14-2768-435e-8ed1-73ecac4e05d8-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "24935b14-2768-435e-8ed1-73ecac4e05d8" (UID: "24935b14-2768-435e-8ed1-73ecac4e05d8"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:11:25.090536 master-0 kubenswrapper[7146]: I0318 13:11:25.090505 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24935b14-2768-435e-8ed1-73ecac4e05d8-kube-api-access-fljgc" (OuterVolumeSpecName: "kube-api-access-fljgc") pod "24935b14-2768-435e-8ed1-73ecac4e05d8" (UID: "24935b14-2768-435e-8ed1-73ecac4e05d8"). InnerVolumeSpecName "kube-api-access-fljgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:11:25.113143 master-0 kubenswrapper[7146]: I0318 13:11:25.113085 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:11:25.145046 master-0 kubenswrapper[7146]: I0318 13:11:25.144397 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:11:25.177562 master-0 kubenswrapper[7146]: I0318 13:11:25.177498 7146 generic.go:334] "Generic (PLEG): container finished" podID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerID="37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb" exitCode=0
Mar 18 13:11:25.177562 master-0 kubenswrapper[7146]: I0318 13:11:25.177530 7146 generic.go:334] "Generic (PLEG): container finished" podID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerID="5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b" exitCode=0
Mar 18 13:11:25.178194 master-0 kubenswrapper[7146]: I0318 13:11:25.178163 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"
Mar 18 13:11:25.182184 master-0 kubenswrapper[7146]: I0318 13:11:25.181150 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" event={"ID":"24935b14-2768-435e-8ed1-73ecac4e05d8","Type":"ContainerDied","Data":"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"}
Mar 18 13:11:25.182184 master-0 kubenswrapper[7146]: I0318 13:11:25.181287 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" event={"ID":"24935b14-2768-435e-8ed1-73ecac4e05d8","Type":"ContainerDied","Data":"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"}
Mar 18 13:11:25.182184 master-0 kubenswrapper[7146]: I0318 13:11:25.181307 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6" event={"ID":"24935b14-2768-435e-8ed1-73ecac4e05d8","Type":"ContainerDied","Data":"90b5f2bab5d48d375ec84dcad33a3cefcbba375c32cba3bc75e2670a6864dd98"}
Mar 18 13:11:25.182184 master-0 kubenswrapper[7146]: I0318 13:11:25.181335 7146 scope.go:117] "RemoveContainer" containerID="37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"
Mar 18 13:11:25.187648 master-0 kubenswrapper[7146]: I0318 13:11:25.187594 7146 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:25.187648 master-0 kubenswrapper[7146]: I0318 13:11:25.187639 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fljgc\" (UniqueName: \"kubernetes.io/projected/24935b14-2768-435e-8ed1-73ecac4e05d8-kube-api-access-fljgc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:25.187648 master-0 kubenswrapper[7146]: I0318 13:11:25.187650 7146 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24935b14-2768-435e-8ed1-73ecac4e05d8-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:25.187648 master-0 kubenswrapper[7146]: I0318 13:11:25.187658 7146 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24935b14-2768-435e-8ed1-73ecac4e05d8-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:25.203016 master-0 kubenswrapper[7146]: I0318 13:11:25.202926 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:11:25.214815 master-0 kubenswrapper[7146]: I0318 13:11:25.214780 7146 scope.go:117] "RemoveContainer" containerID="5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"
Mar 18 13:11:25.239713 master-0 kubenswrapper[7146]: I0318 13:11:25.239480 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"]
Mar 18 13:11:25.309962 master-0 kubenswrapper[7146]: I0318 13:11:25.305147 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-nkkt6"]
Mar 18 13:11:25.314163 master-0 kubenswrapper[7146]: I0318 13:11:25.314118 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"]
Mar 18 13:11:25.314361 master-0 kubenswrapper[7146]: E0318 13:11:25.314338 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="kube-rbac-proxy"
Mar 18 13:11:25.314361 master-0 kubenswrapper[7146]: I0318 13:11:25.314354 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="kube-rbac-proxy"
Mar 18 13:11:25.314464 master-0 kubenswrapper[7146]: E0318 13:11:25.314369 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="machine-approver-controller"
Mar 18 13:11:25.314464 master-0 kubenswrapper[7146]: I0318 13:11:25.314377 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="machine-approver-controller"
Mar 18 13:11:25.315140 master-0 kubenswrapper[7146]: I0318 13:11:25.314977 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="machine-approver-controller"
Mar 18 13:11:25.315140 master-0 kubenswrapper[7146]: I0318 13:11:25.314999 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" containerName="kube-rbac-proxy"
Mar 18 13:11:25.315587 master-0 kubenswrapper[7146]: I0318 13:11:25.315561 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.342659 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.342892 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.343102 7146 scope.go:117] "RemoveContainer" containerID="37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.343118 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-crbnv"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.343141 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: E0318 13:11:25.346086 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb\": container with ID starting with 37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb not found: ID does not exist" containerID="37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.346184 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"} err="failed to get container status \"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb\": rpc error: code = NotFound desc = could not find container \"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb\": container with ID starting with 37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb not found: ID does not exist"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.346221 7146 scope.go:117] "RemoveContainer" containerID="5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.346432 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 13:11:25.348016 master-0 kubenswrapper[7146]: I0318 13:11:25.346458 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 13:11:25.365003 master-0 kubenswrapper[7146]: E0318 13:11:25.363565 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b\": container with ID starting with 5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b not found: ID does not exist" containerID="5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"
Mar 18 13:11:25.365003 master-0 kubenswrapper[7146]: I0318 13:11:25.363619 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"} err="failed to get container status \"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b\": rpc error: code = NotFound desc = could not find container \"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b\": container with ID starting with 5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b not found: ID does not exist"
Mar 18 13:11:25.365003 master-0 kubenswrapper[7146]: I0318 13:11:25.363649 7146 scope.go:117] "RemoveContainer" containerID="37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"
Mar 18 13:11:25.365003 master-0 kubenswrapper[7146]: I0318 13:11:25.364752 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb"} err="failed to get container status \"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb\": rpc error: code = NotFound desc = could not find container \"37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb\": container with ID starting with 37d598af31e9742045e943d48816553143ebb82670be99b1026bc3256c830dbb not found: ID does not exist"
Mar 18 13:11:25.365003 master-0 kubenswrapper[7146]: I0318 13:11:25.364771 7146 scope.go:117] "RemoveContainer" containerID="5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"
Mar 18 13:11:25.365356 master-0 kubenswrapper[7146]: I0318 13:11:25.365257 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b"} err="failed to get container status \"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b\": rpc error: code = NotFound desc = could not find container \"5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b\": container with ID starting with 5e427bca70f9cb972ec6dc92c89edb9505fb47f2af25422e7005f51becc8bf7b not found: ID does not exist"
Mar 18 13:11:25.390725 master-0 kubenswrapper[7146]: I0318 13:11:25.390308 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"
Mar 18 13:11:25.390725 master-0 kubenswrapper[7146]: I0318 13:11:25.390498 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq596\" (UniqueName: \"kubernetes.io/projected/734f9f10-5bde-44d5-a831-021b93fd667d-kube-api-access-mq596\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"
Mar 18 13:11:25.390725 master-0 kubenswrapper[7146]: I0318 13:11:25.390546 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"
Mar 18 13:11:25.390725 master-0 kubenswrapper[7146]: I0318 13:11:25.390588 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"
Mar 18 13:11:25.398891 master-0 kubenswrapper[7146]: I0318 13:11:25.397848 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24935b14-2768-435e-8ed1-73ecac4e05d8" path="/var/lib/kubelet/pods/24935b14-2768-435e-8ed1-73ecac4e05d8/volumes"
Mar 18 13:11:25.491825 master-0 kubenswrapper[7146]: I0318 13:11:25.491773 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.492030 master-0 kubenswrapper[7146]: I0318 13:11:25.491854 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq596\" (UniqueName: \"kubernetes.io/projected/734f9f10-5bde-44d5-a831-021b93fd667d-kube-api-access-mq596\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.492030 master-0 kubenswrapper[7146]: I0318 13:11:25.491888 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.492030 master-0 kubenswrapper[7146]: I0318 13:11:25.491924 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.492530 master-0 kubenswrapper[7146]: I0318 13:11:25.492495 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: 
\"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.493611 master-0 kubenswrapper[7146]: I0318 13:11:25.493579 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.495422 master-0 kubenswrapper[7146]: I0318 13:11:25.495385 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.515057 master-0 kubenswrapper[7146]: I0318 13:11:25.514261 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq596\" (UniqueName: \"kubernetes.io/projected/734f9f10-5bde-44d5-a831-021b93fd667d-kube-api-access-mq596\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.586398 master-0 kubenswrapper[7146]: I0318 13:11:25.586280 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-ckwz8"] Mar 18 13:11:25.648322 master-0 kubenswrapper[7146]: I0318 13:11:25.648279 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"] Mar 18 13:11:25.651396 master-0 kubenswrapper[7146]: W0318 13:11:25.651333 7146 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2e2ef3a_a6e9_44dc_93c7_9f533e75502a.slice/crio-d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4 WatchSource:0}: Error finding container d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4: Status 404 returned error can't find the container with id d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4 Mar 18 13:11:25.706732 master-0 kubenswrapper[7146]: I0318 13:11:25.706670 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:11:25.723623 master-0 kubenswrapper[7146]: I0318 13:11:25.723582 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"] Mar 18 13:11:25.740088 master-0 kubenswrapper[7146]: W0318 13:11:25.739332 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod734f9f10_5bde_44d5_a831_021b93fd667d.slice/crio-087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a WatchSource:0}: Error finding container 087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a: Status 404 returned error can't find the container with id 087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a Mar 18 13:11:26.190544 master-0 kubenswrapper[7146]: I0318 13:11:26.190400 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" event={"ID":"2385db6b-4286-4839-822c-aa9c52290172","Type":"ContainerStarted","Data":"13810da49e2cffbdae8184d949848aeb737e74e92204d109458ebc1563642f36"} Mar 18 13:11:26.190544 master-0 kubenswrapper[7146]: I0318 13:11:26.190450 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" 
event={"ID":"2385db6b-4286-4839-822c-aa9c52290172","Type":"ContainerStarted","Data":"76706e531d703321ab797434284e0ec77d46262c1f93022a12f301f5e424b532"} Mar 18 13:11:26.190544 master-0 kubenswrapper[7146]: I0318 13:11:26.190463 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" event={"ID":"2385db6b-4286-4839-822c-aa9c52290172","Type":"ContainerStarted","Data":"ae165efde01e25d890b70e74ec7c26c2fa71fdd6d466511fae93c4948c21b840"} Mar 18 13:11:26.197676 master-0 kubenswrapper[7146]: I0318 13:11:26.197627 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" event={"ID":"734f9f10-5bde-44d5-a831-021b93fd667d","Type":"ContainerStarted","Data":"4eb80f598ce47d38e570eddb21d014faaf9d873a484757e2336dff55ebecdc96"} Mar 18 13:11:26.197676 master-0 kubenswrapper[7146]: I0318 13:11:26.197672 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" event={"ID":"734f9f10-5bde-44d5-a831-021b93fd667d","Type":"ContainerStarted","Data":"087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a"} Mar 18 13:11:26.202027 master-0 kubenswrapper[7146]: I0318 13:11:26.201976 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" event={"ID":"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a","Type":"ContainerStarted","Data":"53981f9fd50c59d2630dd1cf2e852cdd7411f984a8becb9009659357450852a0"} Mar 18 13:11:26.202027 master-0 kubenswrapper[7146]: I0318 13:11:26.202021 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" event={"ID":"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a","Type":"ContainerStarted","Data":"d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4"} Mar 18 13:11:26.203325 master-0 kubenswrapper[7146]: I0318 13:11:26.203271 
7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" event={"ID":"c074751c-6b3c-44df-aca5-42fa69662779","Type":"ContainerStarted","Data":"a890ba92b025096e34e81f53a6cf37b1fcac472b14f9584479797572ac09eeb3"} Mar 18 13:11:26.217482 master-0 kubenswrapper[7146]: I0318 13:11:26.217352 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" podStartSLOduration=2.217324429 podStartE2EDuration="2.217324429s" podCreationTimestamp="2026-03-18 13:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:26.210025369 +0000 UTC m=+195.018242760" watchObservedRunningTime="2026-03-18 13:11:26.217324429 +0000 UTC m=+195.025541810" Mar 18 13:11:27.244248 master-0 kubenswrapper[7146]: I0318 13:11:27.244111 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" event={"ID":"734f9f10-5bde-44d5-a831-021b93fd667d","Type":"ContainerStarted","Data":"bf9efcefa6211001d8f08607f67b510663e50278def7ed0ac4963e0d3210e802"} Mar 18 13:11:27.313414 master-0 kubenswrapper[7146]: I0318 13:11:27.313337 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" podStartSLOduration=2.313320823 podStartE2EDuration="2.313320823s" podCreationTimestamp="2026-03-18 13:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:27.311158451 +0000 UTC m=+196.119375832" watchObservedRunningTime="2026-03-18 13:11:27.313320823 +0000 UTC m=+196.121538184" Mar 18 13:11:30.760169 master-0 kubenswrapper[7146]: I0318 13:11:30.760123 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:11:32.275036 master-0 kubenswrapper[7146]: I0318 13:11:32.274973 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"] Mar 18 13:11:32.287580 master-0 kubenswrapper[7146]: I0318 13:11:32.284846 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="cluster-cloud-controller-manager" containerID="cri-o://2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e" gracePeriod=30 Mar 18 13:11:32.287924 master-0 kubenswrapper[7146]: I0318 13:11:32.284895 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="kube-rbac-proxy" containerID="cri-o://f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65" gracePeriod=30 Mar 18 13:11:32.288111 master-0 kubenswrapper[7146]: I0318 13:11:32.285049 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="config-sync-controllers" containerID="cri-o://bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c" gracePeriod=30 Mar 18 13:11:32.324133 master-0 kubenswrapper[7146]: I0318 13:11:32.324097 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2qjl7"] Mar 18 13:11:32.326508 master-0 kubenswrapper[7146]: I0318 13:11:32.326473 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.331099 master-0 kubenswrapper[7146]: I0318 13:11:32.331052 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:11:32.336899 master-0 kubenswrapper[7146]: I0318 13:11:32.335565 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xvxxf" Mar 18 13:11:32.427988 master-0 kubenswrapper[7146]: I0318 13:11:32.427887 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.428220 master-0 kubenswrapper[7146]: I0318 13:11:32.428183 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3c106be-27ea-4849-b365-eff6d25f5e71-rootfs\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.428269 master-0 kubenswrapper[7146]: I0318 13:11:32.428249 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.429006 master-0 kubenswrapper[7146]: I0318 13:11:32.428400 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hthf8\" (UniqueName: \"kubernetes.io/projected/f3c106be-27ea-4849-b365-eff6d25f5e71-kube-api-access-hthf8\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.530025 master-0 kubenswrapper[7146]: I0318 13:11:32.529846 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3c106be-27ea-4849-b365-eff6d25f5e71-rootfs\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.530025 master-0 kubenswrapper[7146]: I0318 13:11:32.529894 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.530025 master-0 kubenswrapper[7146]: I0318 13:11:32.529917 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hthf8\" (UniqueName: \"kubernetes.io/projected/f3c106be-27ea-4849-b365-eff6d25f5e71-kube-api-access-hthf8\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.530025 master-0 kubenswrapper[7146]: I0318 13:11:32.529965 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.530630 
master-0 kubenswrapper[7146]: I0318 13:11:32.530552 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3c106be-27ea-4849-b365-eff6d25f5e71-rootfs\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.531109 master-0 kubenswrapper[7146]: I0318 13:11:32.531089 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.534339 master-0 kubenswrapper[7146]: I0318 13:11:32.534308 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.546604 master-0 kubenswrapper[7146]: I0318 13:11:32.546552 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hthf8\" (UniqueName: \"kubernetes.io/projected/f3c106be-27ea-4849-b365-eff6d25f5e71-kube-api-access-hthf8\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:32.701560 master-0 kubenswrapper[7146]: I0318 13:11:32.701466 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:11:33.641270 master-0 kubenswrapper[7146]: I0318 13:11:33.641216 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:33.677006 master-0 kubenswrapper[7146]: I0318 13:11:33.676763 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:11:34.987536 master-0 kubenswrapper[7146]: W0318 13:11:34.983847 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c106be_27ea_4849_b365_eff6d25f5e71.slice/crio-b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7 WatchSource:0}: Error finding container b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7: Status 404 returned error can't find the container with id b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7 Mar 18 13:11:35.031789 master-0 kubenswrapper[7146]: I0318 13:11:35.031742 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" Mar 18 13:11:35.164689 master-0 kubenswrapper[7146]: I0318 13:11:35.164640 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48t2p\" (UniqueName: \"kubernetes.io/projected/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-kube-api-access-48t2p\") pod \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " Mar 18 13:11:35.164912 master-0 kubenswrapper[7146]: I0318 13:11:35.164722 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-auth-proxy-config\") pod \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " Mar 18 13:11:35.164912 master-0 kubenswrapper[7146]: I0318 13:11:35.164762 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-images\") pod \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " Mar 18 13:11:35.164912 master-0 kubenswrapper[7146]: I0318 13:11:35.164787 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-host-etc-kube\") pod \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\" (UID: \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " Mar 18 13:11:35.164912 master-0 kubenswrapper[7146]: I0318 13:11:35.164808 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-cloud-controller-manager-operator-tls\") pod \"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\" (UID: 
\"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f\") " Mar 18 13:11:35.164912 master-0 kubenswrapper[7146]: I0318 13:11:35.164897 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" (UID: "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:35.165144 master-0 kubenswrapper[7146]: I0318 13:11:35.165057 7146 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:35.166480 master-0 kubenswrapper[7146]: I0318 13:11:35.165444 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-images" (OuterVolumeSpecName: "images") pod "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" (UID: "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:11:35.166480 master-0 kubenswrapper[7146]: I0318 13:11:35.165455 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" (UID: "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:11:35.168224 master-0 kubenswrapper[7146]: I0318 13:11:35.168110 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" (UID: "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:11:35.168454 master-0 kubenswrapper[7146]: I0318 13:11:35.168430 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-kube-api-access-48t2p" (OuterVolumeSpecName: "kube-api-access-48t2p") pod "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" (UID: "fcc459ef-4847-43bc-9f1b-e7bd1335dd8f"). InnerVolumeSpecName "kube-api-access-48t2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:35.266795 master-0 kubenswrapper[7146]: I0318 13:11:35.266751 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48t2p\" (UniqueName: \"kubernetes.io/projected/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-kube-api-access-48t2p\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:35.266795 master-0 kubenswrapper[7146]: I0318 13:11:35.266797 7146 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:35.266997 master-0 kubenswrapper[7146]: I0318 13:11:35.266812 7146 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-images\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:35.266997 master-0 kubenswrapper[7146]: I0318 13:11:35.266832 7146 
reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:35.303612 master-0 kubenswrapper[7146]: I0318 13:11:35.303571 7146 generic.go:334] "Generic (PLEG): container finished" podID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerID="f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65" exitCode=0 Mar 18 13:11:35.303612 master-0 kubenswrapper[7146]: I0318 13:11:35.303598 7146 generic.go:334] "Generic (PLEG): container finished" podID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerID="bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c" exitCode=0 Mar 18 13:11:35.303612 master-0 kubenswrapper[7146]: I0318 13:11:35.303606 7146 generic.go:334] "Generic (PLEG): container finished" podID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerID="2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e" exitCode=0 Mar 18 13:11:35.303918 master-0 kubenswrapper[7146]: I0318 13:11:35.303647 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerDied","Data":"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"} Mar 18 13:11:35.303918 master-0 kubenswrapper[7146]: I0318 13:11:35.303673 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerDied","Data":"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"} Mar 18 13:11:35.303918 master-0 kubenswrapper[7146]: I0318 13:11:35.303685 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerDied","Data":"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"}
Mar 18 13:11:35.303918 master-0 kubenswrapper[7146]: I0318 13:11:35.303696 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj" event={"ID":"fcc459ef-4847-43bc-9f1b-e7bd1335dd8f","Type":"ContainerDied","Data":"d6aa70225229cd8d076f3c277c4695c96efe66c480ed25e342a99d26cce5aa22"}
Mar 18 13:11:35.303918 master-0 kubenswrapper[7146]: I0318 13:11:35.303711 7146 scope.go:117] "RemoveContainer" containerID="f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"
Mar 18 13:11:35.303918 master-0 kubenswrapper[7146]: I0318 13:11:35.303821 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"
Mar 18 13:11:35.320738 master-0 kubenswrapper[7146]: I0318 13:11:35.320649 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" event={"ID":"f3c106be-27ea-4849-b365-eff6d25f5e71","Type":"ContainerStarted","Data":"b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7"}
Mar 18 13:11:35.321817 master-0 kubenswrapper[7146]: I0318 13:11:35.321762 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" event={"ID":"c074751c-6b3c-44df-aca5-42fa69662779","Type":"ContainerStarted","Data":"3218671e0d8f8c1591a8a17593f0d0f416cb65c76acbdb06463c8883ca515189"}
Mar 18 13:11:35.349012 master-0 kubenswrapper[7146]: I0318 13:11:35.348523 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" podStartSLOduration=1.964234115 podStartE2EDuration="11.348503361s" podCreationTimestamp="2026-03-18 13:11:24 +0000 UTC" firstStartedPulling="2026-03-18 13:11:25.593590947 +0000 UTC m=+194.401808308" lastFinishedPulling="2026-03-18 13:11:34.977860183 +0000 UTC m=+203.786077554" observedRunningTime="2026-03-18 13:11:35.347583944 +0000 UTC m=+204.155801315" watchObservedRunningTime="2026-03-18 13:11:35.348503361 +0000 UTC m=+204.156720732"
Mar 18 13:11:35.356570 master-0 kubenswrapper[7146]: I0318 13:11:35.356524 7146 scope.go:117] "RemoveContainer" containerID="bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"
Mar 18 13:11:35.396678 master-0 kubenswrapper[7146]: I0318 13:11:35.393190 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"]
Mar 18 13:11:35.403128 master-0 kubenswrapper[7146]: I0318 13:11:35.402960 7146 scope.go:117] "RemoveContainer" containerID="2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"
Mar 18 13:11:35.411992 master-0 kubenswrapper[7146]: I0318 13:11:35.411929 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-dnztj"]
Mar 18 13:11:35.424266 master-0 kubenswrapper[7146]: I0318 13:11:35.424235 7146 scope.go:117] "RemoveContainer" containerID="f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"
Mar 18 13:11:35.424739 master-0 kubenswrapper[7146]: E0318 13:11:35.424585 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": container with ID starting with f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65 not found: ID does not exist" containerID="f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"
Mar 18 13:11:35.424739 master-0 kubenswrapper[7146]: I0318 13:11:35.424613 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"} err="failed to get container status \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": rpc error: code = NotFound desc = could not find container \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": container with ID starting with f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65 not found: ID does not exist"
Mar 18 13:11:35.424739 master-0 kubenswrapper[7146]: I0318 13:11:35.424634 7146 scope.go:117] "RemoveContainer" containerID="bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"
Mar 18 13:11:35.426374 master-0 kubenswrapper[7146]: E0318 13:11:35.426332 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": container with ID starting with bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c not found: ID does not exist" containerID="bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"
Mar 18 13:11:35.426449 master-0 kubenswrapper[7146]: I0318 13:11:35.426409 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"} err="failed to get container status \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": rpc error: code = NotFound desc = could not find container \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": container with ID starting with bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c not found: ID does not exist"
Mar 18 13:11:35.426449 master-0 kubenswrapper[7146]: I0318 13:11:35.426426 7146 scope.go:117] "RemoveContainer" containerID="2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"
Mar 18 13:11:35.426774 master-0 kubenswrapper[7146]: E0318 13:11:35.426747 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": container with ID starting with 2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e not found: ID does not exist" containerID="2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"
Mar 18 13:11:35.426774 master-0 kubenswrapper[7146]: I0318 13:11:35.426769 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"} err="failed to get container status \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": rpc error: code = NotFound desc = could not find container \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": container with ID starting with 2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e not found: ID does not exist"
Mar 18 13:11:35.426845 master-0 kubenswrapper[7146]: I0318 13:11:35.426781 7146 scope.go:117] "RemoveContainer" containerID="f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"
Mar 18 13:11:35.427426 master-0 kubenswrapper[7146]: I0318 13:11:35.427394 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"} err="failed to get container status \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": rpc error: code = NotFound desc = could not find container \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": container with ID starting with f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65 not found: ID does not exist"
Mar 18 13:11:35.427426 master-0 kubenswrapper[7146]: I0318 13:11:35.427418 7146 scope.go:117] "RemoveContainer" containerID="bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"
Mar 18 13:11:35.427669 master-0 kubenswrapper[7146]: I0318 13:11:35.427619 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"} err="failed to get container status \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": rpc error: code = NotFound desc = could not find container \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": container with ID starting with bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c not found: ID does not exist"
Mar 18 13:11:35.427669 master-0 kubenswrapper[7146]: I0318 13:11:35.427639 7146 scope.go:117] "RemoveContainer" containerID="2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"
Mar 18 13:11:35.427930 master-0 kubenswrapper[7146]: I0318 13:11:35.427902 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"} err="failed to get container status \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": rpc error: code = NotFound desc = could not find container \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": container with ID starting with 2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e not found: ID does not exist"
Mar 18 13:11:35.427930 master-0 kubenswrapper[7146]: I0318 13:11:35.427922 7146 scope.go:117] "RemoveContainer" containerID="f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"
Mar 18 13:11:35.428205 master-0 kubenswrapper[7146]: I0318 13:11:35.428172 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65"} err="failed to get container status \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": rpc error: code = NotFound desc = could not find container \"f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65\": container with ID starting with f2683b55176329d1919089ad2d6070fd159596b261fbcae6ee0461c536368e65 not found: ID does not exist"
Mar 18 13:11:35.428205 master-0 kubenswrapper[7146]: I0318 13:11:35.428197 7146 scope.go:117] "RemoveContainer" containerID="bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"
Mar 18 13:11:35.428457 master-0 kubenswrapper[7146]: I0318 13:11:35.428431 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c"} err="failed to get container status \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": rpc error: code = NotFound desc = could not find container \"bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c\": container with ID starting with bce5b6f24d006c2e2d8d8ea6cc529eddc2c435bcd8ce5ebb4376a2527c82791c not found: ID does not exist"
Mar 18 13:11:35.428496 master-0 kubenswrapper[7146]: I0318 13:11:35.428460 7146 scope.go:117] "RemoveContainer" containerID="2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"
Mar 18 13:11:35.428717 master-0 kubenswrapper[7146]: I0318 13:11:35.428684 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e"} err="failed to get container status \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": rpc error: code = NotFound desc = could not find container \"2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e\": container with ID starting with 2277f4c8182dc4d77fad8c92de899a18b660c6c886d5078feec2f679e9395f2e not found: ID does not exist"
Mar 18 13:11:35.463678 master-0 kubenswrapper[7146]: I0318 13:11:35.463606 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"]
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: E0318 13:11:35.463921 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="config-sync-controllers"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.463959 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="config-sync-controllers"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: E0318 13:11:35.463974 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="cluster-cloud-controller-manager"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.463982 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="cluster-cloud-controller-manager"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: E0318 13:11:35.464012 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="kube-rbac-proxy"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.464019 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="kube-rbac-proxy"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.464134 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="cluster-cloud-controller-manager"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.464152 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="config-sync-controllers"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.464164 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" containerName="kube-rbac-proxy"
Mar 18 13:11:35.466955 master-0 kubenswrapper[7146]: I0318 13:11:35.465071 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.471365 master-0 kubenswrapper[7146]: I0318 13:11:35.471091 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 13:11:35.471365 master-0 kubenswrapper[7146]: I0318 13:11:35.471319 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 13:11:35.471525 master-0 kubenswrapper[7146]: I0318 13:11:35.471505 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 13:11:35.471612 master-0 kubenswrapper[7146]: I0318 13:11:35.471499 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:11:35.471783 master-0 kubenswrapper[7146]: I0318 13:11:35.471736 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lsr6r"
Mar 18 13:11:35.471827 master-0 kubenswrapper[7146]: I0318 13:11:35.471718 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:11:35.572361 master-0 kubenswrapper[7146]: I0318 13:11:35.572300 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.572361 master-0 kubenswrapper[7146]: I0318 13:11:35.572371 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.572654 master-0 kubenswrapper[7146]: I0318 13:11:35.572394 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4djxt\" (UniqueName: \"kubernetes.io/projected/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-kube-api-access-4djxt\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.572654 master-0 kubenswrapper[7146]: I0318 13:11:35.572434 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.572654 master-0 kubenswrapper[7146]: I0318 13:11:35.572467 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.674051 master-0 kubenswrapper[7146]: I0318 13:11:35.673905 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.674051 master-0 kubenswrapper[7146]: I0318 13:11:35.674012 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.674051 master-0 kubenswrapper[7146]: I0318 13:11:35.674034 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4djxt\" (UniqueName: \"kubernetes.io/projected/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-kube-api-access-4djxt\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.674263 master-0 kubenswrapper[7146]: I0318 13:11:35.674071 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.674263 master-0 kubenswrapper[7146]: I0318 13:11:35.674104 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.681714 master-0 kubenswrapper[7146]: I0318 13:11:35.680630 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.681714 master-0 kubenswrapper[7146]: I0318 13:11:35.681480 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.681714 master-0 kubenswrapper[7146]: I0318 13:11:35.681540 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.681999 master-0 kubenswrapper[7146]: I0318 13:11:35.681724 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.703962 master-0 kubenswrapper[7146]: I0318 13:11:35.699167 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4djxt\" (UniqueName: \"kubernetes.io/projected/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-kube-api-access-4djxt\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:35.798982 master-0 kubenswrapper[7146]: I0318 13:11:35.796497 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:11:36.379044 master-0 kubenswrapper[7146]: I0318 13:11:36.378979 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerStarted","Data":"59e43a5798785560fb9b5499b32da91edb8ae46a4589c047f8415fd258612a45"}
Mar 18 13:11:36.379044 master-0 kubenswrapper[7146]: I0318 13:11:36.379035 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerStarted","Data":"6f72f96fe981864c7efed48f7ec73353e9a984bf6f9e3b23eec1a4ed414c6dbd"}
Mar 18 13:11:36.386805 master-0 kubenswrapper[7146]: I0318 13:11:36.386757 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" event={"ID":"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a","Type":"ContainerStarted","Data":"35a6e219e9c2c306481d98d16c4ce589a46a92dae3b8a5616cb81c85790b7339"}
Mar 18 13:11:36.402962 master-0 kubenswrapper[7146]: I0318 13:11:36.399978 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" event={"ID":"f3c106be-27ea-4849-b365-eff6d25f5e71","Type":"ContainerStarted","Data":"17fa8bc42ca380ea4f053437efc8de8bb77520fbe392cd01a232a3d4864aab3c"}
Mar 18 13:11:36.402962 master-0 kubenswrapper[7146]: I0318 13:11:36.400225 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" event={"ID":"f3c106be-27ea-4849-b365-eff6d25f5e71","Type":"ContainerStarted","Data":"c90f914a288483a0a45e3a60213f00c71029e5d596c05961e9490fd5ab4b5806"}
Mar 18 13:11:36.526889 master-0 kubenswrapper[7146]: I0318 13:11:36.526797 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" podStartSLOduration=3.016967619 podStartE2EDuration="12.526772778s" podCreationTimestamp="2026-03-18 13:11:24 +0000 UTC" firstStartedPulling="2026-03-18 13:11:25.770262476 +0000 UTC m=+194.578479837" lastFinishedPulling="2026-03-18 13:11:35.280067635 +0000 UTC m=+204.088284996" observedRunningTime="2026-03-18 13:11:36.525538973 +0000 UTC m=+205.333756334" watchObservedRunningTime="2026-03-18 13:11:36.526772778 +0000 UTC m=+205.334990159"
Mar 18 13:11:37.367591 master-0 kubenswrapper[7146]: I0318 13:11:37.367530 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc459ef-4847-43bc-9f1b-e7bd1335dd8f" path="/var/lib/kubelet/pods/fcc459ef-4847-43bc-9f1b-e7bd1335dd8f/volumes"
Mar 18 13:11:37.408593 master-0 kubenswrapper[7146]: I0318 13:11:37.408552 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerStarted","Data":"9ea155b6a2a2c7a73a232b5aea75777c49f53ad64322a3ebea13586eab2c7ec1"}
Mar 18 13:11:37.408593 master-0 kubenswrapper[7146]: I0318 13:11:37.408590 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerStarted","Data":"416f123fbbc7d637d66d383e9de461fd5b529d5d437df7cc58e7901b8e2c57aa"}
Mar 18 13:11:37.641445 master-0 kubenswrapper[7146]: I0318 13:11:37.641305 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" podStartSLOduration=2.641288396 podStartE2EDuration="2.641288396s" podCreationTimestamp="2026-03-18 13:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:37.639850835 +0000 UTC m=+206.448068196" watchObservedRunningTime="2026-03-18 13:11:37.641288396 +0000 UTC m=+206.449505757"
Mar 18 13:11:37.643179 master-0 kubenswrapper[7146]: I0318 13:11:37.643150 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" podStartSLOduration=5.64314439 podStartE2EDuration="5.64314439s" podCreationTimestamp="2026-03-18 13:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:36.612912664 +0000 UTC m=+205.421130025" watchObservedRunningTime="2026-03-18 13:11:37.64314439 +0000 UTC m=+206.451361751"
Mar 18 13:11:42.306794 master-0 kubenswrapper[7146]: I0318 13:11:42.306740 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"]
Mar 18 13:11:42.307841 master-0 kubenswrapper[7146]: I0318 13:11:42.307784 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.310090 master-0 kubenswrapper[7146]: I0318 13:11:42.310061 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-rxnwp"
Mar 18 13:11:42.310608 master-0 kubenswrapper[7146]: I0318 13:11:42.310558 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 13:11:42.332037 master-0 kubenswrapper[7146]: I0318 13:11:42.331985 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"]
Mar 18 13:11:42.379319 master-0 kubenswrapper[7146]: I0318 13:11:42.379261 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.379576 master-0 kubenswrapper[7146]: I0318 13:11:42.379490 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.379635 master-0 kubenswrapper[7146]: I0318 13:11:42.379607 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6sr4\" (UniqueName: \"kubernetes.io/projected/17adbc1a-f29c-4278-b29a-0cc3879b753f-kube-api-access-v6sr4\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.481497 master-0 kubenswrapper[7146]: I0318 13:11:42.481437 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6sr4\" (UniqueName: \"kubernetes.io/projected/17adbc1a-f29c-4278-b29a-0cc3879b753f-kube-api-access-v6sr4\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.481717 master-0 kubenswrapper[7146]: I0318 13:11:42.481565 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.481717 master-0 kubenswrapper[7146]: I0318 13:11:42.481590 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.482981 master-0 kubenswrapper[7146]: I0318 13:11:42.482911 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.484965 master-0 kubenswrapper[7146]: I0318 13:11:42.484925 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.499231 master-0 kubenswrapper[7146]: I0318 13:11:42.499190 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6sr4\" (UniqueName: \"kubernetes.io/projected/17adbc1a-f29c-4278-b29a-0cc3879b753f-kube-api-access-v6sr4\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:42.626991 master-0 kubenswrapper[7146]: I0318 13:11:42.626821 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"
Mar 18 13:11:43.267569 master-0 kubenswrapper[7146]: I0318 13:11:43.267527 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s"]
Mar 18 13:11:43.273995 master-0 kubenswrapper[7146]: W0318 13:11:43.273955 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17adbc1a_f29c_4278_b29a_0cc3879b753f.slice/crio-f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105 WatchSource:0}: Error finding container f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105: Status 404 returned error can't find the container with id f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105
Mar 18 13:11:43.308135 master-0 kubenswrapper[7146]: I0318 13:11:43.308094 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-mqh5c_8ce8e99d-7b02-4bf4-a438-adde851918cb/authentication-operator/0.log"
Mar 18 13:11:43.457951 master-0 kubenswrapper[7146]: I0318 13:11:43.457891 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" event={"ID":"17adbc1a-f29c-4278-b29a-0cc3879b753f","Type":"ContainerStarted","Data":"f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105"}
Mar 18 13:11:43.511973 master-0 kubenswrapper[7146]: I0318 13:11:43.511920 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-mqh5c_8ce8e99d-7b02-4bf4-a438-adde851918cb/authentication-operator/1.log"
Mar 18 13:11:43.903905 master-0 kubenswrapper[7146]: I0318 13:11:43.903861 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7d95bbc4f4-4ch22_9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/fix-audit-permissions/0.log"
Mar 18 13:11:43.961722 master-0 kubenswrapper[7146]: I0318 13:11:43.961657 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb"]
Mar 18 13:11:43.962484 master-0 kubenswrapper[7146]: I0318 13:11:43.962436 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb"
Mar 18 13:11:43.970430 master-0 kubenswrapper[7146]: I0318 13:11:43.970388 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-mtnzv"]
Mar 18 13:11:43.971700 master-0 kubenswrapper[7146]: I0318 13:11:43.971673 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:11:43.973089 master-0 kubenswrapper[7146]: I0318 13:11:43.973054 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm"]
Mar 18 13:11:43.973577 master-0 kubenswrapper[7146]: I0318 13:11:43.973550 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm"
Mar 18 13:11:43.973962 master-0 kubenswrapper[7146]: I0318 13:11:43.973900 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 13:11:43.974450 master-0 kubenswrapper[7146]: I0318 13:11:43.974398 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 13:11:43.975411 master-0 kubenswrapper[7146]: I0318 13:11:43.975380 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 13:11:43.975490 master-0 kubenswrapper[7146]: I0318 13:11:43.975381 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 13:11:43.975634 master-0 kubenswrapper[7146]: I0318 13:11:43.975577 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 13:11:43.975819 master-0 kubenswrapper[7146]: I0318 13:11:43.975789 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 13:11:43.976836 master-0 kubenswrapper[7146]: I0318 13:11:43.976806 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 13:11:43.992247 master-0 kubenswrapper[7146]: I0318 13:11:43.992193 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb"]
Mar 18 13:11:44.020879 master-0 kubenswrapper[7146]: I0318 13:11:44.020827 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm"]
Mar 18 13:11:44.100684 master-0 kubenswrapper[7146]: I0318 13:11:44.100621 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bfw\" (UniqueName: \"kubernetes.io/projected/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-kube-api-access-w6bfw\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:11:44.100879 master-0 kubenswrapper[7146]: I0318 13:11:44.100735 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxgz\" (UniqueName: \"kubernetes.io/projected/ebe459df-4be3-4a73-a061-5d2c637f57be-kube-api-access-fqxgz\") pod \"network-check-source-b4bf74f6-qnwtb\" (UID: \"ebe459df-4be3-4a73-a061-5d2c637f57be\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb"
Mar 18 13:11:44.100879 master-0 kubenswrapper[7146]: I0318 13:11:44.100796 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-stats-auth\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:11:44.100879 master-0 kubenswrapper[7146]: I0318 13:11:44.100842 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-default-certificate\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:11:44.101059 master-0 kubenswrapper[7146]: I0318 13:11:44.100900 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-metrics-certs\") pod \"router-default-7dcf5569b5-mtnzv\"
(UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.101059 master-0 kubenswrapper[7146]: I0318 13:11:44.101023 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-service-ca-bundle\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.101249 master-0 kubenswrapper[7146]: I0318 13:11:44.101210 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/92e396cd-a0d9-4b6b-9d82-add1ce2a8712-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-8qhwm\" (UID: \"92e396cd-a0d9-4b6b-9d82-add1ce2a8712\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:11:44.104276 master-0 kubenswrapper[7146]: I0318 13:11:44.104242 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7d95bbc4f4-4ch22_9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/oauth-apiserver/0.log" Mar 18 13:11:44.203099 master-0 kubenswrapper[7146]: I0318 13:11:44.202990 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bfw\" (UniqueName: \"kubernetes.io/projected/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-kube-api-access-w6bfw\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.203099 master-0 kubenswrapper[7146]: I0318 13:11:44.203083 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqxgz\" (UniqueName: \"kubernetes.io/projected/ebe459df-4be3-4a73-a061-5d2c637f57be-kube-api-access-fqxgz\") pod 
\"network-check-source-b4bf74f6-qnwtb\" (UID: \"ebe459df-4be3-4a73-a061-5d2c637f57be\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" Mar 18 13:11:44.203099 master-0 kubenswrapper[7146]: I0318 13:11:44.203106 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-stats-auth\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.203525 master-0 kubenswrapper[7146]: I0318 13:11:44.203125 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-default-certificate\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.203525 master-0 kubenswrapper[7146]: I0318 13:11:44.203148 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-metrics-certs\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.203525 master-0 kubenswrapper[7146]: I0318 13:11:44.203181 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-service-ca-bundle\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.203525 master-0 kubenswrapper[7146]: I0318 13:11:44.203504 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" 
(UniqueName: \"kubernetes.io/secret/92e396cd-a0d9-4b6b-9d82-add1ce2a8712-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-8qhwm\" (UID: \"92e396cd-a0d9-4b6b-9d82-add1ce2a8712\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:11:44.204457 master-0 kubenswrapper[7146]: I0318 13:11:44.204420 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-service-ca-bundle\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.207381 master-0 kubenswrapper[7146]: I0318 13:11:44.207296 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/92e396cd-a0d9-4b6b-9d82-add1ce2a8712-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-8qhwm\" (UID: \"92e396cd-a0d9-4b6b-9d82-add1ce2a8712\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:11:44.207381 master-0 kubenswrapper[7146]: I0318 13:11:44.207365 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-default-certificate\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.211610 master-0 kubenswrapper[7146]: I0318 13:11:44.211572 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-stats-auth\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.216650 master-0 
kubenswrapper[7146]: I0318 13:11:44.216577 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-metrics-certs\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.222283 master-0 kubenswrapper[7146]: I0318 13:11:44.222212 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bfw\" (UniqueName: \"kubernetes.io/projected/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-kube-api-access-w6bfw\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.230560 master-0 kubenswrapper[7146]: I0318 13:11:44.230515 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxgz\" (UniqueName: \"kubernetes.io/projected/ebe459df-4be3-4a73-a061-5d2c637f57be-kube-api-access-fqxgz\") pod \"network-check-source-b4bf74f6-qnwtb\" (UID: \"ebe459df-4be3-4a73-a061-5d2c637f57be\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" Mar 18 13:11:44.289764 master-0 kubenswrapper[7146]: I0318 13:11:44.289727 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-hmbpl_1bf0ea4e-8b08-488f-b252-39580f46b756/etcd-operator/0.log" Mar 18 13:11:44.301072 master-0 kubenswrapper[7146]: I0318 13:11:44.301022 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" Mar 18 13:11:44.324925 master-0 kubenswrapper[7146]: I0318 13:11:44.324877 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:44.343357 master-0 kubenswrapper[7146]: I0318 13:11:44.343302 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:11:44.478961 master-0 kubenswrapper[7146]: I0318 13:11:44.477055 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerStarted","Data":"6f92bee18602c78e97abff330426051be6816bfa6a663d5ddee07fcf7b81c8a2"} Mar 18 13:11:44.482573 master-0 kubenswrapper[7146]: I0318 13:11:44.482528 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" event={"ID":"17adbc1a-f29c-4278-b29a-0cc3879b753f","Type":"ContainerStarted","Data":"81d14c599258dcda6adba938c38875362cd3dfcc6e30d2c979edd532468f934a"} Mar 18 13:11:44.482650 master-0 kubenswrapper[7146]: I0318 13:11:44.482588 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" event={"ID":"17adbc1a-f29c-4278-b29a-0cc3879b753f","Type":"ContainerStarted","Data":"ea098486f4dc00d516848689091052951444062d9e2ae5ef81e67aadee11ef6e"} Mar 18 13:11:44.554698 master-0 kubenswrapper[7146]: I0318 13:11:44.553039 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-hmbpl_1bf0ea4e-8b08-488f-b252-39580f46b756/etcd-operator/1.log" Mar 18 13:11:44.672397 master-0 kubenswrapper[7146]: I0318 13:11:44.672320 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" podStartSLOduration=2.672303932 podStartE2EDuration="2.672303932s" podCreationTimestamp="2026-03-18 13:11:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:44.667984017 +0000 UTC m=+213.476201378" watchObservedRunningTime="2026-03-18 13:11:44.672303932 +0000 UTC m=+213.480521303" Mar 18 13:11:44.814492 master-0 kubenswrapper[7146]: I0318 13:11:44.814372 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log" Mar 18 13:11:44.889669 master-0 kubenswrapper[7146]: I0318 13:11:44.889058 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log" Mar 18 13:11:44.891354 master-0 kubenswrapper[7146]: I0318 13:11:44.891319 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm"] Mar 18 13:11:44.894470 master-0 kubenswrapper[7146]: I0318 13:11:44.894408 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb"] Mar 18 13:11:44.896133 master-0 kubenswrapper[7146]: W0318 13:11:44.895150 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebe459df_4be3_4a73_a061_5d2c637f57be.slice/crio-27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e WatchSource:0}: Error finding container 27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e: Status 404 returned error can't find the container with id 27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e Mar 18 13:11:44.903137 master-0 kubenswrapper[7146]: W0318 13:11:44.902335 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92e396cd_a0d9_4b6b_9d82_add1ce2a8712.slice/crio-f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6 
WatchSource:0}: Error finding container f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6: Status 404 returned error can't find the container with id f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6 Mar 18 13:11:45.110291 master-0 kubenswrapper[7146]: I0318 13:11:45.110222 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log" Mar 18 13:11:45.298474 master-0 kubenswrapper[7146]: I0318 13:11:45.298421 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 13:11:45.370946 master-0 kubenswrapper[7146]: I0318 13:11:45.370850 7146 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 13:11:45.487966 master-0 kubenswrapper[7146]: I0318 13:11:45.487643 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 13:11:45.493959 master-0 kubenswrapper[7146]: I0318 13:11:45.491167 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" event={"ID":"ebe459df-4be3-4a73-a061-5d2c637f57be","Type":"ContainerStarted","Data":"51480026893ef548fc035d065a817f4b14a0a1ffa4e617ad7c338e1f1fa26122"} Mar 18 13:11:45.493959 master-0 kubenswrapper[7146]: I0318 13:11:45.491287 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" event={"ID":"ebe459df-4be3-4a73-a061-5d2c637f57be","Type":"ContainerStarted","Data":"27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e"} Mar 18 13:11:45.494354 master-0 kubenswrapper[7146]: I0318 13:11:45.494322 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" event={"ID":"92e396cd-a0d9-4b6b-9d82-add1ce2a8712","Type":"ContainerStarted","Data":"f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6"} Mar 18 13:11:45.534852 master-0 kubenswrapper[7146]: I0318 13:11:45.534766 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" podStartSLOduration=286.534748555 podStartE2EDuration="4m46.534748555s" podCreationTimestamp="2026-03-18 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:45.525704824 +0000 UTC m=+214.333922185" watchObservedRunningTime="2026-03-18 13:11:45.534748555 +0000 UTC m=+214.342965916" Mar 18 13:11:45.723010 master-0 kubenswrapper[7146]: I0318 13:11:45.722904 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:11:45.886979 master-0 kubenswrapper[7146]: I0318 13:11:45.886174 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log" Mar 18 13:11:46.089866 master-0 kubenswrapper[7146]: I0318 13:11:46.088818 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:11:46.288791 master-0 kubenswrapper[7146]: I0318 13:11:46.288736 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_f32b4d4d-df54-4fa7-a940-297e064fea44/installer/0.log" Mar 18 13:11:46.489166 master-0 kubenswrapper[7146]: I0318 13:11:46.489124 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-5zbrg_c2c4572e-0b38-4db1-96e5-6a35e29048e7/kube-apiserver-operator/0.log" Mar 18 13:11:46.693440 master-0 kubenswrapper[7146]: I0318 13:11:46.692654 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-5zbrg_c2c4572e-0b38-4db1-96e5-6a35e29048e7/kube-apiserver-operator/1.log" Mar 18 13:11:46.886783 master-0 kubenswrapper[7146]: I0318 13:11:46.886573 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log" Mar 18 13:11:47.092324 master-0 kubenswrapper[7146]: I0318 13:11:47.092109 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log" Mar 18 13:11:47.285551 master-0 kubenswrapper[7146]: I0318 13:11:47.285485 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log" Mar 18 13:11:47.488920 master-0 kubenswrapper[7146]: I0318 13:11:47.488870 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88cd8323-8857-41fe-85d4-e6064330ec71/installer/0.log" Mar 18 13:11:47.509837 master-0 kubenswrapper[7146]: I0318 13:11:47.509797 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" event={"ID":"92e396cd-a0d9-4b6b-9d82-add1ce2a8712","Type":"ContainerStarted","Data":"feb42a254e514ad5bcda2efba376372670973f2a85de2caa609901a9113d6c76"} Mar 18 13:11:47.510479 master-0 kubenswrapper[7146]: I0318 13:11:47.510462 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:11:47.511510 master-0 kubenswrapper[7146]: I0318 13:11:47.511469 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerStarted","Data":"e11b0d0a2f2fcef8559280a2714debec3210ea7873ccaa447460e5bbe4ca1669"} Mar 18 13:11:47.516617 master-0 kubenswrapper[7146]: I0318 13:11:47.516586 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:11:47.549919 master-0 kubenswrapper[7146]: I0318 13:11:47.549658 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" podStartSLOduration=162.445499139 podStartE2EDuration="2m44.54963931s" podCreationTimestamp="2026-03-18 13:09:03 +0000 UTC" firstStartedPulling="2026-03-18 13:11:44.905606606 +0000 UTC m=+213.713823977" lastFinishedPulling="2026-03-18 13:11:47.009746787 +0000 UTC m=+215.817964148" observedRunningTime="2026-03-18 13:11:47.526216824 +0000 UTC m=+216.334434195" watchObservedRunningTime="2026-03-18 13:11:47.54963931 +0000 UTC m=+216.357856671" Mar 18 13:11:47.563279 master-0 kubenswrapper[7146]: I0318 13:11:47.563204 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podStartSLOduration=176.909438767 podStartE2EDuration="2m59.563181061s" podCreationTimestamp="2026-03-18 13:08:48 +0000 UTC" firstStartedPulling="2026-03-18 13:11:44.354323614 +0000 UTC m=+213.162540985" lastFinishedPulling="2026-03-18 13:11:47.008065918 +0000 UTC m=+215.816283279" observedRunningTime="2026-03-18 13:11:47.549282629 +0000 UTC m=+216.357500000" watchObservedRunningTime="2026-03-18 13:11:47.563181061 +0000 UTC m=+216.371398432" Mar 18 13:11:47.693327 
master-0 kubenswrapper[7146]: I0318 13:11:47.691500 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_f4d88fc1-4e92-432e-ac2c-e1c489b15e93/installer/0.log" Mar 18 13:11:47.707683 master-0 kubenswrapper[7146]: I0318 13:11:47.705472 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"] Mar 18 13:11:47.710150 master-0 kubenswrapper[7146]: I0318 13:11:47.710114 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:11:47.714280 master-0 kubenswrapper[7146]: I0318 13:11:47.714240 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:11:47.714280 master-0 kubenswrapper[7146]: I0318 13:11:47.714273 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 13:11:47.714503 master-0 kubenswrapper[7146]: I0318 13:11:47.714461 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-j868d" Mar 18 13:11:47.714758 master-0 kubenswrapper[7146]: I0318 13:11:47.714729 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:11:47.725761 master-0 kubenswrapper[7146]: I0318 13:11:47.725696 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"] Mar 18 13:11:47.776621 master-0 kubenswrapper[7146]: I0318 13:11:47.776472 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4f5s4"] Mar 18 13:11:47.777734 master-0 kubenswrapper[7146]: I0318 13:11:47.777698 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:11:47.780951 master-0 kubenswrapper[7146]: I0318 13:11:47.780865 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nq9c8" Mar 18 13:11:47.780951 master-0 kubenswrapper[7146]: I0318 13:11:47.780871 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 13:11:47.782243 master-0 kubenswrapper[7146]: I0318 13:11:47.782218 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 13:11:47.868183 master-0 kubenswrapper[7146]: I0318 13:11:47.868109 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:11:47.868183 master-0 kubenswrapper[7146]: I0318 13:11:47.868182 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:11:47.868490 master-0 kubenswrapper[7146]: I0318 13:11:47.868262 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls\") pod 
\"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:11:47.868490 master-0 kubenswrapper[7146]: I0318 13:11:47.868454 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:11:47.868580 master-0 kubenswrapper[7146]: I0318 13:11:47.868489 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mks\" (UniqueName: \"kubernetes.io/projected/5a715e53-1874-4993-93d1-504c3470a6f5-kube-api-access-99mks\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:11:47.868580 master-0 kubenswrapper[7146]: I0318 13:11:47.868538 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:11:47.868580 master-0 kubenswrapper[7146]: I0318 13:11:47.868574 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbv4l\" (UniqueName: \"kubernetes.io/projected/02879f34-7062-4f07-9a5a-f965103d9182-kube-api-access-jbv4l\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:11:47.888559 master-0 kubenswrapper[7146]: I0318 
13:11:47.888516 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log"
Mar 18 13:11:47.970108 master-0 kubenswrapper[7146]: I0318 13:11:47.970046 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.970345 master-0 kubenswrapper[7146]: I0318 13:11:47.970141 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:47.970345 master-0 kubenswrapper[7146]: I0318 13:11:47.970167 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mks\" (UniqueName: \"kubernetes.io/projected/5a715e53-1874-4993-93d1-504c3470a6f5-kube-api-access-99mks\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.970345 master-0 kubenswrapper[7146]: I0318 13:11:47.970193 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:47.970345 master-0 kubenswrapper[7146]: I0318 13:11:47.970214 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbv4l\" (UniqueName: \"kubernetes.io/projected/02879f34-7062-4f07-9a5a-f965103d9182-kube-api-access-jbv4l\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:47.970345 master-0 kubenswrapper[7146]: I0318 13:11:47.970243 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.970345 master-0 kubenswrapper[7146]: I0318 13:11:47.970268 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.971680 master-0 kubenswrapper[7146]: I0318 13:11:47.971633 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.973357 master-0 kubenswrapper[7146]: I0318 13:11:47.973305 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.973808 master-0 kubenswrapper[7146]: I0318 13:11:47.973769 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:47.976491 master-0 kubenswrapper[7146]: I0318 13:11:47.976457 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:47.981847 master-0 kubenswrapper[7146]: I0318 13:11:47.981802 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:47.993779 master-0 kubenswrapper[7146]: I0318 13:11:47.993730 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbv4l\" (UniqueName: \"kubernetes.io/projected/02879f34-7062-4f07-9a5a-f965103d9182-kube-api-access-jbv4l\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:47.994554 master-0 kubenswrapper[7146]: I0318 13:11:47.994502 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mks\" (UniqueName: \"kubernetes.io/projected/5a715e53-1874-4993-93d1-504c3470a6f5-kube-api-access-99mks\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:48.033585 master-0 kubenswrapper[7146]: I0318 13:11:48.033430 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:11:48.088547 master-0 kubenswrapper[7146]: I0318 13:11:48.088375 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/0.log"
Mar 18 13:11:48.110259 master-0 kubenswrapper[7146]: I0318 13:11:48.109795 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4f5s4"
Mar 18 13:11:48.290143 master-0 kubenswrapper[7146]: I0318 13:11:48.289094 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/0.log"
Mar 18 13:11:48.325610 master-0 kubenswrapper[7146]: I0318 13:11:48.325573 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:11:48.328136 master-0 kubenswrapper[7146]: I0318 13:11:48.328109 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:48.328136 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:11:48.328136 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:11:48.328136 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:11:48.328309 master-0 kubenswrapper[7146]: I0318 13:11:48.328150 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:48.415046 master-0 kubenswrapper[7146]: I0318 13:11:48.414974 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"]
Mar 18 13:11:48.419343 master-0 kubenswrapper[7146]: W0318 13:11:48.419285 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a715e53_1874_4993_93d1_504c3470a6f5.slice/crio-22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63 WatchSource:0}: Error finding container 22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63: Status 404 returned error can't find the container with id 22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63
Mar 18 13:11:48.487483 master-0 kubenswrapper[7146]: I0318 13:11:48.487440 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-recovery-controller/0.log"
Mar 18 13:11:48.517366 master-0 kubenswrapper[7146]: I0318 13:11:48.517311 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" event={"ID":"5a715e53-1874-4993-93d1-504c3470a6f5","Type":"ContainerStarted","Data":"22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63"}
Mar 18 13:11:48.518762 master-0 kubenswrapper[7146]: I0318 13:11:48.518740 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4f5s4" event={"ID":"02879f34-7062-4f07-9a5a-f965103d9182","Type":"ContainerStarted","Data":"0ef0c0a6a1e78fb8cf517df46f3190158fccc7763df2b4852ba1d5b63246cee8"}
Mar 18 13:11:48.518807 master-0 kubenswrapper[7146]: I0318 13:11:48.518767 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4f5s4" event={"ID":"02879f34-7062-4f07-9a5a-f965103d9182","Type":"ContainerStarted","Data":"6ae0c5f6306fcc2bc4d200c31e8ec02db83741ac24faf2d432c77d6884f24b98"}
Mar 18 13:11:48.535321 master-0 kubenswrapper[7146]: I0318 13:11:48.535245 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4f5s4" podStartSLOduration=1.535221027 podStartE2EDuration="1.535221027s" podCreationTimestamp="2026-03-18 13:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:48.533043794 +0000 UTC m=+217.341261155" watchObservedRunningTime="2026-03-18 13:11:48.535221027 +0000 UTC m=+217.343438398"
Mar 18 13:11:48.689107 master-0 kubenswrapper[7146]: I0318 13:11:48.689077 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-nqtlk_e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/kube-controller-manager-operator/0.log"
Mar 18 13:11:48.885022 master-0 kubenswrapper[7146]: I0318 13:11:48.884966 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-nqtlk_e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/kube-controller-manager-operator/1.log"
Mar 18 13:11:49.104038 master-0 kubenswrapper[7146]: I0318 13:11:49.103042 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log"
Mar 18 13:11:49.286651 master-0 kubenswrapper[7146]: I0318 13:11:49.286589 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log"
Mar 18 13:11:49.327348 master-0 kubenswrapper[7146]: I0318 13:11:49.327291 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:49.327348 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:11:49.327348 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:11:49.327348 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:11:49.327661 master-0 kubenswrapper[7146]: I0318 13:11:49.327353 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:49.489081 master-0 kubenswrapper[7146]: I0318 13:11:49.489045 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_245f3af1-ccfb-4191-9a34-00852e52a73d/installer/0.log"
Mar 18 13:11:49.689391 master-0 kubenswrapper[7146]: I0318 13:11:49.689350 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-4bfbf_93ea3c78-dede-468f-89a5-551133f794c5/kube-scheduler-operator-container/0.log"
Mar 18 13:11:49.907316 master-0 kubenswrapper[7146]: I0318 13:11:49.907089 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-4bfbf_93ea3c78-dede-468f-89a5-551133f794c5/kube-scheduler-operator-container/1.log"
Mar 18 13:11:50.091895 master-0 kubenswrapper[7146]: I0318 13:11:50.091824 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-lwfvl_cb471665-2b07-48df-9881-3fb663390b23/openshift-apiserver-operator/0.log"
Mar 18 13:11:50.285219 master-0 kubenswrapper[7146]: I0318 13:11:50.285164 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-574f6d5bf6-8krhk_b41c9132-92ef-429d-bdd5-9bdb024e04fc/fix-audit-permissions/0.log"
Mar 18 13:11:50.328480 master-0 kubenswrapper[7146]: I0318 13:11:50.328426 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:50.328480 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:11:50.328480 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:11:50.328480 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:11:50.328651 master-0 kubenswrapper[7146]: I0318 13:11:50.328483 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:50.489308 master-0 kubenswrapper[7146]: I0318 13:11:50.489268 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-574f6d5bf6-8krhk_b41c9132-92ef-429d-bdd5-9bdb024e04fc/openshift-apiserver/0.log"
Mar 18 13:11:50.535263 master-0 kubenswrapper[7146]: I0318 13:11:50.535206 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" event={"ID":"5a715e53-1874-4993-93d1-504c3470a6f5","Type":"ContainerStarted","Data":"b23de5825037a5b299ffba9d7128459e303f31dd9211c9a6d29b95ab124a7d09"}
Mar 18 13:11:50.535263 master-0 kubenswrapper[7146]: I0318 13:11:50.535249 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" event={"ID":"5a715e53-1874-4993-93d1-504c3470a6f5","Type":"ContainerStarted","Data":"f5eadc393c83a23a7bb66932bed7e70f1f922f8a6c07948d21a0a20ebb724a60"}
Mar 18 13:11:50.558597 master-0 kubenswrapper[7146]: I0318 13:11:50.558497 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" podStartSLOduration=1.908224242 podStartE2EDuration="3.558471473s" podCreationTimestamp="2026-03-18 13:11:47 +0000 UTC" firstStartedPulling="2026-03-18 13:11:48.421268568 +0000 UTC m=+217.229485929" lastFinishedPulling="2026-03-18 13:11:50.071515799 +0000 UTC m=+218.879733160" observedRunningTime="2026-03-18 13:11:50.554181199 +0000 UTC m=+219.362398560" watchObservedRunningTime="2026-03-18 13:11:50.558471473 +0000 UTC m=+219.366688844"
Mar 18 13:11:50.688766 master-0 kubenswrapper[7146]: I0318 13:11:50.688698 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-574f6d5bf6-8krhk_b41c9132-92ef-429d-bdd5-9bdb024e04fc/openshift-apiserver-check-endpoints/0.log"
Mar 18 13:11:50.891456 master-0 kubenswrapper[7146]: I0318 13:11:50.891344 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-hmbpl_1bf0ea4e-8b08-488f-b252-39580f46b756/etcd-operator/0.log"
Mar 18 13:11:51.086376 master-0 kubenswrapper[7146]: I0318 13:11:51.086292 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-hmbpl_1bf0ea4e-8b08-488f-b252-39580f46b756/etcd-operator/1.log"
Mar 18 13:11:51.289381 master-0 kubenswrapper[7146]: I0318 13:11:51.289328 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-p9k56_47f82c03-65d1-4a6c-ba09-8a00ae778009/catalog-operator/0.log"
Mar 18 13:11:51.328488 master-0 kubenswrapper[7146]: I0318 13:11:51.328430 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:51.328488 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:11:51.328488 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:11:51.328488 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:11:51.328869 master-0 kubenswrapper[7146]: I0318 13:11:51.328534 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:51.493187 master-0 kubenswrapper[7146]: I0318 13:11:51.493101 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-8r4hr_35925474-e3fe-4cff-aad6-d853816618c7/olm-operator/0.log"
Mar 18 13:11:51.686059 master-0 kubenswrapper[7146]: I0318 13:11:51.685917 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-kbpvr_36db10b8-33a2-4b54-85e2-9809eb6bc37d/kube-rbac-proxy/0.log"
Mar 18 13:11:51.896609 master-0 kubenswrapper[7146]: I0318 13:11:51.896576 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-kbpvr_36db10b8-33a2-4b54-85e2-9809eb6bc37d/package-server-manager/0.log"
Mar 18 13:11:52.045402 master-0 kubenswrapper[7146]: I0318 13:11:52.045349 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"]
Mar 18 13:11:52.046537 master-0 kubenswrapper[7146]: I0318 13:11:52.046506 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.050278 master-0 kubenswrapper[7146]: I0318 13:11:52.050245 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wbjcs"
Mar 18 13:11:52.050278 master-0 kubenswrapper[7146]: I0318 13:11:52.050263 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 13:11:52.050422 master-0 kubenswrapper[7146]: I0318 13:11:52.050245 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 13:11:52.050793 master-0 kubenswrapper[7146]: I0318 13:11:52.050744 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-f55c6"]
Mar 18 13:11:52.052269 master-0 kubenswrapper[7146]: I0318 13:11:52.052235 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.053841 master-0 kubenswrapper[7146]: I0318 13:11:52.053809 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 13:11:52.054343 master-0 kubenswrapper[7146]: I0318 13:11:52.054311 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qmkzj"
Mar 18 13:11:52.057625 master-0 kubenswrapper[7146]: I0318 13:11:52.057595 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 13:11:52.061141 master-0 kubenswrapper[7146]: I0318 13:11:52.061089 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"]
Mar 18 13:11:52.062492 master-0 kubenswrapper[7146]: I0318 13:11:52.062444 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.065320 master-0 kubenswrapper[7146]: I0318 13:11:52.065285 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"]
Mar 18 13:11:52.075655 master-0 kubenswrapper[7146]: I0318 13:11:52.075616 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 13:11:52.076061 master-0 kubenswrapper[7146]: I0318 13:11:52.076032 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 13:11:52.076267 master-0 kubenswrapper[7146]: I0318 13:11:52.076228 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-x4c9n"
Mar 18 13:11:52.076267 master-0 kubenswrapper[7146]: I0318 13:11:52.076254 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 13:11:52.101548 master-0 kubenswrapper[7146]: I0318 13:11:52.101501 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-5dccbdd8cc-pw7vm_375d5112-d2be-47cf-bee1-82614ba71ed8/packageserver/0.log"
Mar 18 13:11:52.103732 master-0 kubenswrapper[7146]: I0318 13:11:52.103688 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"]
Mar 18 13:11:52.146183 master-0 kubenswrapper[7146]: I0318 13:11:52.146138 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.146508 master-0 kubenswrapper[7146]: I0318 13:11:52.146462 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.146629 master-0 kubenswrapper[7146]: I0318 13:11:52.146608 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.146741 master-0 kubenswrapper[7146]: I0318 13:11:52.146722 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2msq\" (UniqueName: \"kubernetes.io/projected/b856d226-a137-4954-82c5-5929d579387a-kube-api-access-n2msq\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.146875 master-0 kubenswrapper[7146]: I0318 13:11:52.146855 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.147017 master-0 kubenswrapper[7146]: I0318 13:11:52.146996 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6ed4f640-d481-4e7a-92eb-f0eda82e138c-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.147143 master-0 kubenswrapper[7146]: I0318 13:11:52.147121 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.147255 master-0 kubenswrapper[7146]: I0318 13:11:52.147236 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.147375 master-0 kubenswrapper[7146]: I0318 13:11:52.147356 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s9rk\" (UniqueName: \"kubernetes.io/projected/3c0d0048-6d96-459c-8742-2f092af44a6a-kube-api-access-2s9rk\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.147502 master-0 kubenswrapper[7146]: I0318 13:11:52.147478 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.147634 master-0 kubenswrapper[7146]: I0318 13:11:52.147616 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-node-exporter-wtmp\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.147748 master-0 kubenswrapper[7146]: I0318 13:11:52.147731 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-root\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.147886 master-0 kubenswrapper[7146]: I0318 13:11:52.147869 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.148018 master-0 kubenswrapper[7146]: I0318 13:11:52.147999 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmmv\" (UniqueName: \"kubernetes.io/projected/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-api-access-xhmmv\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.148158 master-0 kubenswrapper[7146]: I0318 13:11:52.148141 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.148263 master-0 kubenswrapper[7146]: I0318 13:11:52.148244 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b856d226-a137-4954-82c5-5929d579387a-node-exporter-textfile\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.148364 master-0 kubenswrapper[7146]: I0318 13:11:52.148347 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.148475 master-0 kubenswrapper[7146]: I0318 13:11:52.148459 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-sys\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.249244 master-0 kubenswrapper[7146]: I0318 13:11:52.249159 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-root\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.249244 master-0 kubenswrapper[7146]: I0318 13:11:52.249239 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.249574 master-0 kubenswrapper[7146]: I0318 13:11:52.249331 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-root\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.249574 master-0 kubenswrapper[7146]: I0318 13:11:52.249372 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhmmv\" (UniqueName: \"kubernetes.io/projected/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-api-access-xhmmv\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.249664 master-0 kubenswrapper[7146]: I0318 13:11:52.249574 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.249664 master-0 kubenswrapper[7146]: I0318 13:11:52.249628 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b856d226-a137-4954-82c5-5929d579387a-node-exporter-textfile\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.249664 master-0 kubenswrapper[7146]: I0318 13:11:52.249655 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.249773 master-0 kubenswrapper[7146]: I0318 13:11:52.249690 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-sys\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.249773 master-0 kubenswrapper[7146]: I0318 13:11:52.249756 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.249849 master-0 kubenswrapper[7146]: I0318 13:11:52.249780 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.249849 master-0 kubenswrapper[7146]: I0318 13:11:52.249816 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.249930 master-0 kubenswrapper[7146]: I0318 13:11:52.249870 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-sys\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.250203 master-0 kubenswrapper[7146]: I0318 13:11:52.250173 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b856d226-a137-4954-82c5-5929d579387a-node-exporter-textfile\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.250264 master-0 kubenswrapper[7146]: I0318 13:11:52.250225 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2msq\" (UniqueName: \"kubernetes.io/projected/b856d226-a137-4954-82c5-5929d579387a-kube-api-access-n2msq\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.250264 master-0 kubenswrapper[7146]: I0318 13:11:52.250257 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.250351 master-0 kubenswrapper[7146]: I0318 13:11:52.250279 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6ed4f640-d481-4e7a-92eb-f0eda82e138c-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.250351 master-0 kubenswrapper[7146]: I0318 13:11:52.250305 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.250351 master-0 kubenswrapper[7146]: I0318 13:11:52.250326 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.250351 master-0 kubenswrapper[7146]: I0318 13:11:52.250349 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s9rk\" (UniqueName: \"kubernetes.io/projected/3c0d0048-6d96-459c-8742-2f092af44a6a-kube-api-access-2s9rk\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:11:52.250510 master-0 kubenswrapper[7146]: I0318 13:11:52.250368 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:11:52.250510 master-0 kubenswrapper[7146]: I0318 13:11:52.250396 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-node-exporter-wtmp\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:11:52.250589 master-0 kubenswrapper[7146]: I0318 13:11:52.250553 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-node-exporter-wtmp\") 
pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:11:52.251525 master-0 kubenswrapper[7146]: I0318 13:11:52.250850 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:11:52.251525 master-0 kubenswrapper[7146]: I0318 13:11:52.250949 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.251525 master-0 kubenswrapper[7146]: I0318 13:11:52.251273 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6ed4f640-d481-4e7a-92eb-f0eda82e138c-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.253144 master-0 kubenswrapper[7146]: I0318 13:11:52.253112 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.254343 master-0 kubenswrapper[7146]: I0318 13:11:52.253999 7146 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:11:52.255596 master-0 kubenswrapper[7146]: I0318 13:11:52.255550 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:11:52.255783 master-0 kubenswrapper[7146]: I0318 13:11:52.255759 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.255904 master-0 kubenswrapper[7146]: I0318 13:11:52.255869 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:11:52.259429 master-0 kubenswrapper[7146]: I0318 13:11:52.259384 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls\") pod \"node-exporter-f55c6\" (UID: 
\"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:11:52.259610 master-0 kubenswrapper[7146]: I0318 13:11:52.259588 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:11:52.261111 master-0 kubenswrapper[7146]: I0318 13:11:52.261082 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.269683 master-0 kubenswrapper[7146]: I0318 13:11:52.269634 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s9rk\" (UniqueName: \"kubernetes.io/projected/3c0d0048-6d96-459c-8742-2f092af44a6a-kube-api-access-2s9rk\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:11:52.272217 master-0 kubenswrapper[7146]: I0318 13:11:52.272161 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhmmv\" (UniqueName: \"kubernetes.io/projected/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-api-access-xhmmv\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.273844 master-0 kubenswrapper[7146]: I0318 13:11:52.273807 7146 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-n2msq\" (UniqueName: \"kubernetes.io/projected/b856d226-a137-4954-82c5-5929d579387a-kube-api-access-n2msq\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:11:52.328042 master-0 kubenswrapper[7146]: I0318 13:11:52.327725 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:52.328042 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:52.328042 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:52.328042 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:52.328042 master-0 kubenswrapper[7146]: I0318 13:11:52.327778 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:52.363755 master-0 kubenswrapper[7146]: I0318 13:11:52.363125 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:11:52.379683 master-0 kubenswrapper[7146]: I0318 13:11:52.379637 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:11:52.397120 master-0 kubenswrapper[7146]: W0318 13:11:52.397072 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb856d226_a137_4954_82c5_5929d579387a.slice/crio-e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643 WatchSource:0}: Error finding container e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643: Status 404 returned error can't find the container with id e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643 Mar 18 13:11:52.403069 master-0 kubenswrapper[7146]: I0318 13:11:52.403030 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:11:52.554415 master-0 kubenswrapper[7146]: I0318 13:11:52.554366 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-f55c6" event={"ID":"b856d226-a137-4954-82c5-5929d579387a","Type":"ContainerStarted","Data":"e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643"} Mar 18 13:11:52.782442 master-0 kubenswrapper[7146]: I0318 13:11:52.782380 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"] Mar 18 13:11:52.870894 master-0 kubenswrapper[7146]: I0318 13:11:52.870551 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"] Mar 18 13:11:53.327775 master-0 kubenswrapper[7146]: I0318 13:11:53.327727 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:53.327775 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:53.327775 
master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:53.327775 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:53.328408 master-0 kubenswrapper[7146]: I0318 13:11:53.327798 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:53.560694 master-0 kubenswrapper[7146]: I0318 13:11:53.560628 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" event={"ID":"6ed4f640-d481-4e7a-92eb-f0eda82e138c","Type":"ContainerStarted","Data":"f257d90986f3bc5c917783e713efe22ea2b8502b23f0e13b32408883ab3d2ef8"} Mar 18 13:11:53.562814 master-0 kubenswrapper[7146]: I0318 13:11:53.562784 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" event={"ID":"3c0d0048-6d96-459c-8742-2f092af44a6a","Type":"ContainerStarted","Data":"900289bab5482922973fb57cb6f94cb9da6434b85774d94b6eaffe519b31c200"} Mar 18 13:11:53.562858 master-0 kubenswrapper[7146]: I0318 13:11:53.562817 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" event={"ID":"3c0d0048-6d96-459c-8742-2f092af44a6a","Type":"ContainerStarted","Data":"904e4e387ad013b8ddb401b36b6742fef5b6785ce4266ffd7234d2228c2b2143"} Mar 18 13:11:53.562858 master-0 kubenswrapper[7146]: I0318 13:11:53.562834 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" event={"ID":"3c0d0048-6d96-459c-8742-2f092af44a6a","Type":"ContainerStarted","Data":"dfe654b41556fae7663227362582c9c8b439e29f071dbdc91344f393aa640b68"} Mar 18 13:11:54.325819 master-0 kubenswrapper[7146]: I0318 13:11:54.325735 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:11:54.328326 master-0 kubenswrapper[7146]: I0318 13:11:54.328257 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:54.328326 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:54.328326 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:54.328326 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:54.328326 master-0 kubenswrapper[7146]: I0318 13:11:54.328313 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:55.327806 master-0 kubenswrapper[7146]: I0318 13:11:55.327767 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:55.327806 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:55.327806 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:55.327806 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:55.328166 master-0 kubenswrapper[7146]: I0318 13:11:55.328141 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:56.328133 master-0 kubenswrapper[7146]: I0318 13:11:56.328004 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:56.328133 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:56.328133 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:56.328133 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:56.328703 master-0 kubenswrapper[7146]: I0318 13:11:56.328143 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:57.328071 master-0 kubenswrapper[7146]: I0318 13:11:57.327978 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:57.328071 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:57.328071 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:57.328071 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:57.328786 master-0 kubenswrapper[7146]: I0318 13:11:57.328077 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:57.589621 master-0 kubenswrapper[7146]: I0318 13:11:57.589436 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" event={"ID":"6ed4f640-d481-4e7a-92eb-f0eda82e138c","Type":"ContainerStarted","Data":"86925f867d511d4cdb2490997eb054e68c4dc9aba928344396a7492069135ebd"} Mar 18 13:11:57.589621 
master-0 kubenswrapper[7146]: I0318 13:11:57.589503 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" event={"ID":"6ed4f640-d481-4e7a-92eb-f0eda82e138c","Type":"ContainerStarted","Data":"834c871831256d6c69973c1d0a0f35c651c50b5b81ff267c4537a4bda05849ac"} Mar 18 13:11:57.589621 master-0 kubenswrapper[7146]: I0318 13:11:57.589524 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" event={"ID":"6ed4f640-d481-4e7a-92eb-f0eda82e138c","Type":"ContainerStarted","Data":"9f12463433d4b06c551f9dce957d911b3e1a22513f9a706c6f67dc347c28b34a"} Mar 18 13:11:57.592251 master-0 kubenswrapper[7146]: I0318 13:11:57.592209 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" event={"ID":"3c0d0048-6d96-459c-8742-2f092af44a6a","Type":"ContainerStarted","Data":"3aeb35a0f16d4f7a20230191b3393082ce9a3dc414450f59f3d1cdd618fc6464"} Mar 18 13:11:57.600845 master-0 kubenswrapper[7146]: I0318 13:11:57.600768 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-648866dd9c-ztkrd"] Mar 18 13:11:57.602021 master-0 kubenswrapper[7146]: I0318 13:11:57.601985 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.604810 master-0 kubenswrapper[7146]: I0318 13:11:57.604766 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 13:11:57.604920 master-0 kubenswrapper[7146]: I0318 13:11:57.604894 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jlgxc" Mar 18 13:11:57.605036 master-0 kubenswrapper[7146]: I0318 13:11:57.605005 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2dpn1smcfbjnb" Mar 18 13:11:57.605096 master-0 kubenswrapper[7146]: I0318 13:11:57.605037 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 13:11:57.605257 master-0 kubenswrapper[7146]: I0318 13:11:57.605231 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 13:11:57.605637 master-0 kubenswrapper[7146]: I0318 13:11:57.605604 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 13:11:57.623067 master-0 kubenswrapper[7146]: I0318 13:11:57.622998 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" podStartSLOduration=1.768339118 podStartE2EDuration="5.622978194s" podCreationTimestamp="2026-03-18 13:11:52 +0000 UTC" firstStartedPulling="2026-03-18 13:11:52.875369975 +0000 UTC m=+221.683587336" lastFinishedPulling="2026-03-18 13:11:56.730009051 +0000 UTC m=+225.538226412" observedRunningTime="2026-03-18 13:11:57.619541695 +0000 UTC m=+226.427759166" watchObservedRunningTime="2026-03-18 13:11:57.622978194 +0000 UTC m=+226.431195565" Mar 18 13:11:57.630205 master-0 kubenswrapper[7146]: I0318 13:11:57.630142 7146 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-648866dd9c-ztkrd"] Mar 18 13:11:57.672930 master-0 kubenswrapper[7146]: I0318 13:11:57.672842 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" podStartSLOduration=2.009972673 podStartE2EDuration="5.672799862s" podCreationTimestamp="2026-03-18 13:11:52 +0000 UTC" firstStartedPulling="2026-03-18 13:11:53.062729813 +0000 UTC m=+221.870947184" lastFinishedPulling="2026-03-18 13:11:56.725557012 +0000 UTC m=+225.533774373" observedRunningTime="2026-03-18 13:11:57.670054633 +0000 UTC m=+226.478272014" watchObservedRunningTime="2026-03-18 13:11:57.672799862 +0000 UTC m=+226.481017233" Mar 18 13:11:57.746753 master-0 kubenswrapper[7146]: I0318 13:11:57.746681 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.747077 master-0 kubenswrapper[7146]: I0318 13:11:57.747024 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.747718 master-0 kubenswrapper[7146]: I0318 13:11:57.747311 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: 
\"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.747718 master-0 kubenswrapper[7146]: I0318 13:11:57.747444 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.747718 master-0 kubenswrapper[7146]: I0318 13:11:57.747527 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.747951 master-0 kubenswrapper[7146]: I0318 13:11:57.747721 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.747951 master-0 kubenswrapper[7146]: I0318 13:11:57.747815 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.849714 master-0 kubenswrapper[7146]: I0318 
13:11:57.849583 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.849714 master-0 kubenswrapper[7146]: I0318 13:11:57.849666 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.849714 master-0 kubenswrapper[7146]: I0318 13:11:57.849709 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.849982 master-0 kubenswrapper[7146]: I0318 13:11:57.849801 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.849982 master-0 kubenswrapper[7146]: I0318 13:11:57.849869 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") pod 
\"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.849982 master-0 kubenswrapper[7146]: I0318 13:11:57.849965 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.850072 master-0 kubenswrapper[7146]: I0318 13:11:57.850002 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.853963 master-0 kubenswrapper[7146]: I0318 13:11:57.851151 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.853963 master-0 kubenswrapper[7146]: I0318 13:11:57.851547 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.853963 master-0 kubenswrapper[7146]: I0318 13:11:57.851748 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.855536 master-0 kubenswrapper[7146]: I0318 13:11:57.855490 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.864959 master-0 kubenswrapper[7146]: I0318 13:11:57.864289 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.876959 master-0 kubenswrapper[7146]: I0318 13:11:57.875206 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:57.885961 master-0 kubenswrapper[7146]: I0318 13:11:57.882963 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 
13:11:57.932961 master-0 kubenswrapper[7146]: I0318 13:11:57.930456 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:11:58.328084 master-0 kubenswrapper[7146]: I0318 13:11:58.327786 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:58.328084 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:58.328084 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:58.328084 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:58.328084 master-0 kubenswrapper[7146]: I0318 13:11:58.327897 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:58.507422 master-0 kubenswrapper[7146]: I0318 13:11:58.506931 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-648866dd9c-ztkrd"] Mar 18 13:11:58.525136 master-0 kubenswrapper[7146]: W0318 13:11:58.525087 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb79758b7_9129_496c_abec_80d455648454.slice/crio-f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc WatchSource:0}: Error finding container f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc: Status 404 returned error can't find the container with id f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc Mar 18 13:11:58.598559 master-0 kubenswrapper[7146]: I0318 13:11:58.598492 7146 generic.go:334] "Generic (PLEG): container finished" 
podID="b856d226-a137-4954-82c5-5929d579387a" containerID="9d044af973bd01a08e8fcad763eafdffa737337304ddc7ac842ceb7418ae0dec" exitCode=0 Mar 18 13:11:58.598817 master-0 kubenswrapper[7146]: I0318 13:11:58.598583 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-f55c6" event={"ID":"b856d226-a137-4954-82c5-5929d579387a","Type":"ContainerDied","Data":"9d044af973bd01a08e8fcad763eafdffa737337304ddc7ac842ceb7418ae0dec"} Mar 18 13:11:58.600411 master-0 kubenswrapper[7146]: I0318 13:11:58.600374 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" event={"ID":"b79758b7-9129-496c-abec-80d455648454","Type":"ContainerStarted","Data":"f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc"} Mar 18 13:11:59.328167 master-0 kubenswrapper[7146]: I0318 13:11:59.327924 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:59.328167 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:11:59.328167 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:11:59.328167 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:11:59.328476 master-0 kubenswrapper[7146]: I0318 13:11:59.328218 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:59.605976 master-0 kubenswrapper[7146]: I0318 13:11:59.605687 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-f55c6" 
event={"ID":"b856d226-a137-4954-82c5-5929d579387a","Type":"ContainerStarted","Data":"a715d1041132befe33f459bb71dd5fa59bfbbd3377f14836f8f0f2e66d54df7e"} Mar 18 13:11:59.605976 master-0 kubenswrapper[7146]: I0318 13:11:59.605732 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-f55c6" event={"ID":"b856d226-a137-4954-82c5-5929d579387a","Type":"ContainerStarted","Data":"eac9755f14a7e2f7e2ac3fcd7c7feb0c66243a4e93b8ec743da07fbddf495e28"} Mar 18 13:11:59.635989 master-0 kubenswrapper[7146]: I0318 13:11:59.635842 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-f55c6" podStartSLOduration=1.863049102 podStartE2EDuration="7.63582286s" podCreationTimestamp="2026-03-18 13:11:52 +0000 UTC" firstStartedPulling="2026-03-18 13:11:52.399385557 +0000 UTC m=+221.207602918" lastFinishedPulling="2026-03-18 13:11:58.172159325 +0000 UTC m=+226.980376676" observedRunningTime="2026-03-18 13:11:59.632245457 +0000 UTC m=+228.440462818" watchObservedRunningTime="2026-03-18 13:11:59.63582286 +0000 UTC m=+228.444040231" Mar 18 13:12:00.327164 master-0 kubenswrapper[7146]: I0318 13:12:00.327121 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:00.327164 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:00.327164 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:00.327164 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:00.327164 master-0 kubenswrapper[7146]: I0318 13:12:00.327168 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 13:12:00.614626 master-0 kubenswrapper[7146]: I0318 13:12:00.614485 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" event={"ID":"b79758b7-9129-496c-abec-80d455648454","Type":"ContainerStarted","Data":"6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb"} Mar 18 13:12:00.638540 master-0 kubenswrapper[7146]: I0318 13:12:00.638442 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" podStartSLOduration=1.7723743079999998 podStartE2EDuration="3.638416848s" podCreationTimestamp="2026-03-18 13:11:57 +0000 UTC" firstStartedPulling="2026-03-18 13:11:58.526983486 +0000 UTC m=+227.335200847" lastFinishedPulling="2026-03-18 13:12:00.393026036 +0000 UTC m=+229.201243387" observedRunningTime="2026-03-18 13:12:00.635521755 +0000 UTC m=+229.443739116" watchObservedRunningTime="2026-03-18 13:12:00.638416848 +0000 UTC m=+229.446634219" Mar 18 13:12:01.328059 master-0 kubenswrapper[7146]: I0318 13:12:01.327997 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:01.328059 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:01.328059 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:01.328059 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:01.328320 master-0 kubenswrapper[7146]: I0318 13:12:01.328093 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:02.327577 master-0 kubenswrapper[7146]: I0318 13:12:02.327528 7146 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:02.327577 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:02.327577 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:02.327577 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:02.328285 master-0 kubenswrapper[7146]: I0318 13:12:02.328255 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:03.327783 master-0 kubenswrapper[7146]: I0318 13:12:03.327722 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:03.327783 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:03.327783 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:03.327783 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:03.328441 master-0 kubenswrapper[7146]: I0318 13:12:03.328415 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:04.328603 master-0 kubenswrapper[7146]: I0318 13:12:04.328542 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 13:12:04.328603 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:04.328603 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:04.328603 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:04.329278 master-0 kubenswrapper[7146]: I0318 13:12:04.328619 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:05.327764 master-0 kubenswrapper[7146]: I0318 13:12:05.327688 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:05.327764 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:05.327764 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:05.327764 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:05.328025 master-0 kubenswrapper[7146]: I0318 13:12:05.327791 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:06.328979 master-0 kubenswrapper[7146]: I0318 13:12:06.327857 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:06.328979 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:06.328979 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:06.328979 master-0 
kubenswrapper[7146]: healthz check failed Mar 18 13:12:06.328979 master-0 kubenswrapper[7146]: I0318 13:12:06.327930 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:07.327472 master-0 kubenswrapper[7146]: I0318 13:12:07.327417 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:07.327472 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:07.327472 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:07.327472 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:07.327799 master-0 kubenswrapper[7146]: I0318 13:12:07.327480 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:08.328074 master-0 kubenswrapper[7146]: I0318 13:12:08.327998 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:08.328074 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:08.328074 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:08.328074 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:08.328784 master-0 kubenswrapper[7146]: I0318 13:12:08.328091 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:09.328328 master-0 kubenswrapper[7146]: I0318 13:12:09.328280 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:09.328328 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:09.328328 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:09.328328 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:09.329020 master-0 kubenswrapper[7146]: I0318 13:12:09.328980 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:10.328791 master-0 kubenswrapper[7146]: I0318 13:12:10.328716 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:10.328791 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:10.328791 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:10.328791 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:10.329522 master-0 kubenswrapper[7146]: I0318 13:12:10.328810 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:11.327020 
master-0 kubenswrapper[7146]: I0318 13:12:11.326913 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:11.327020 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:11.327020 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:11.327020 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:11.327371 master-0 kubenswrapper[7146]: I0318 13:12:11.327030 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:12.328721 master-0 kubenswrapper[7146]: I0318 13:12:12.328656 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:12.328721 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:12.328721 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:12.328721 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:12.329841 master-0 kubenswrapper[7146]: I0318 13:12:12.329789 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:13.328358 master-0 kubenswrapper[7146]: I0318 13:12:13.328304 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:13.328358 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:13.328358 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:13.328358 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:13.328633 master-0 kubenswrapper[7146]: I0318 13:12:13.328392 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:14.327458 master-0 kubenswrapper[7146]: I0318 13:12:14.327387 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:14.327458 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:14.327458 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:14.327458 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:14.328036 master-0 kubenswrapper[7146]: I0318 13:12:14.327482 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:15.329045 master-0 kubenswrapper[7146]: I0318 13:12:15.328932 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:15.329045 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:15.329045 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:15.329045 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:15.329858 master-0 kubenswrapper[7146]: I0318 13:12:15.329070 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:16.328505 master-0 kubenswrapper[7146]: I0318 13:12:16.328431 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:16.328505 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:16.328505 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:16.328505 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:16.328862 master-0 kubenswrapper[7146]: I0318 13:12:16.328559 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:17.329811 master-0 kubenswrapper[7146]: I0318 13:12:17.329068 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:17.329811 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:17.329811 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:17.329811 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:17.329811 master-0 kubenswrapper[7146]: I0318 13:12:17.329181 7146 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:17.931171 master-0 kubenswrapper[7146]: I0318 13:12:17.931134 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:12:17.931444 master-0 kubenswrapper[7146]: I0318 13:12:17.931431 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:12:18.328392 master-0 kubenswrapper[7146]: I0318 13:12:18.328320 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:18.328392 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:18.328392 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:18.328392 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:18.329019 master-0 kubenswrapper[7146]: I0318 13:12:18.328409 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:19.328428 master-0 kubenswrapper[7146]: I0318 13:12:19.328341 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:19.328428 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:19.328428 master-0 kubenswrapper[7146]: 
[+]process-running ok Mar 18 13:12:19.328428 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:19.329009 master-0 kubenswrapper[7146]: I0318 13:12:19.328532 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:20.328596 master-0 kubenswrapper[7146]: I0318 13:12:20.328536 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:20.328596 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:20.328596 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:20.328596 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:20.329358 master-0 kubenswrapper[7146]: I0318 13:12:20.328611 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:21.328867 master-0 kubenswrapper[7146]: I0318 13:12:21.328790 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:21.328867 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:21.328867 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:21.328867 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:21.328867 master-0 kubenswrapper[7146]: I0318 13:12:21.328860 7146 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:22.328404 master-0 kubenswrapper[7146]: I0318 13:12:22.328346 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:22.328404 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:22.328404 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:22.328404 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:22.328716 master-0 kubenswrapper[7146]: I0318 13:12:22.328411 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:23.327100 master-0 kubenswrapper[7146]: I0318 13:12:23.327028 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:23.327100 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:23.327100 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:23.327100 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:23.327100 master-0 kubenswrapper[7146]: I0318 13:12:23.327095 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 13:12:24.329587 master-0 kubenswrapper[7146]: I0318 13:12:24.329491 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:24.329587 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:24.329587 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:24.329587 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:24.329587 master-0 kubenswrapper[7146]: I0318 13:12:24.329594 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:25.327610 master-0 kubenswrapper[7146]: I0318 13:12:25.327533 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:25.327610 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:12:25.327610 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:12:25.327610 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:12:25.328204 master-0 kubenswrapper[7146]: I0318 13:12:25.328157 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:26.328136 master-0 kubenswrapper[7146]: I0318 13:12:26.328070 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:26.328136 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:26.328136 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:26.328136 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:26.329170 master-0 kubenswrapper[7146]: I0318 13:12:26.329063 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:27.327310 master-0 kubenswrapper[7146]: I0318 13:12:27.327238 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:27.327310 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:27.327310 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:27.327310 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:27.327723 master-0 kubenswrapper[7146]: I0318 13:12:27.327446 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:28.328196 master-0 kubenswrapper[7146]: I0318 13:12:28.327927 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:28.328196 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:28.328196 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:28.328196 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:28.328196 master-0 kubenswrapper[7146]: I0318 13:12:28.328016 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:29.327727 master-0 kubenswrapper[7146]: I0318 13:12:29.327672 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:29.327727 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:29.327727 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:29.327727 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:29.328115 master-0 kubenswrapper[7146]: I0318 13:12:29.327740 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:30.328630 master-0 kubenswrapper[7146]: I0318 13:12:30.328552 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:30.328630 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:30.328630 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:30.328630 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:30.328630 master-0 kubenswrapper[7146]: I0318 13:12:30.328632 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:31.328147 master-0 kubenswrapper[7146]: I0318 13:12:31.328047 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:31.328147 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:31.328147 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:31.328147 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:31.329140 master-0 kubenswrapper[7146]: I0318 13:12:31.328188 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:32.327647 master-0 kubenswrapper[7146]: I0318 13:12:32.327583 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:32.327647 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:32.327647 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:32.327647 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:32.328021 master-0 kubenswrapper[7146]: I0318 13:12:32.327674 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:33.328166 master-0 kubenswrapper[7146]: I0318 13:12:33.328106 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:33.328166 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:33.328166 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:33.328166 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:33.328705 master-0 kubenswrapper[7146]: I0318 13:12:33.328174 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:34.328278 master-0 kubenswrapper[7146]: I0318 13:12:34.328221 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:34.328278 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:34.328278 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:34.328278 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:34.328922 master-0 kubenswrapper[7146]: I0318 13:12:34.328298 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:35.327617 master-0 kubenswrapper[7146]: I0318 13:12:35.327546 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:35.327617 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:35.327617 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:35.327617 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:35.327891 master-0 kubenswrapper[7146]: I0318 13:12:35.327627 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:36.330389 master-0 kubenswrapper[7146]: I0318 13:12:36.330321 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:36.330389 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:36.330389 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:36.330389 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:36.330991 master-0 kubenswrapper[7146]: I0318 13:12:36.330404 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:37.327838 master-0 kubenswrapper[7146]: I0318 13:12:37.327789 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:37.327838 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:37.327838 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:37.327838 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:37.328236 master-0 kubenswrapper[7146]: I0318 13:12:37.328209 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:37.938611 master-0 kubenswrapper[7146]: I0318 13:12:37.938548 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:12:37.947440 master-0 kubenswrapper[7146]: I0318 13:12:37.947378 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:12:38.329839 master-0 kubenswrapper[7146]: I0318 13:12:38.329690 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:38.329839 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:38.329839 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:38.329839 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:38.329839 master-0 kubenswrapper[7146]: I0318 13:12:38.329776 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:39.328845 master-0 kubenswrapper[7146]: I0318 13:12:39.328761 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:39.328845 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:39.328845 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:39.328845 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:39.329560 master-0 kubenswrapper[7146]: I0318 13:12:39.328875 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:40.327387 master-0 kubenswrapper[7146]: I0318 13:12:40.327245 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:40.327387 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:40.327387 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:40.327387 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:40.327387 master-0 kubenswrapper[7146]: I0318 13:12:40.327340 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:41.328631 master-0 kubenswrapper[7146]: I0318 13:12:41.328540 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:41.328631 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:41.328631 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:41.328631 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:41.329386 master-0 kubenswrapper[7146]: I0318 13:12:41.328666 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:42.329322 master-0 kubenswrapper[7146]: I0318 13:12:42.329239 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:42.329322 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:42.329322 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:42.329322 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:42.330233 master-0 kubenswrapper[7146]: I0318 13:12:42.330194 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:43.328372 master-0 kubenswrapper[7146]: I0318 13:12:43.328296 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:43.328372 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:43.328372 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:43.328372 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:43.328644 master-0 kubenswrapper[7146]: I0318 13:12:43.328411 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:44.329483 master-0 kubenswrapper[7146]: I0318 13:12:44.329384 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:44.329483 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:44.329483 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:44.329483 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:44.329483 master-0 kubenswrapper[7146]: I0318 13:12:44.329461 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:45.328437 master-0 kubenswrapper[7146]: I0318 13:12:45.328284 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:45.328437 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:45.328437 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:45.328437 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:45.328906 master-0 kubenswrapper[7146]: I0318 13:12:45.328455 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:46.328600 master-0 kubenswrapper[7146]: I0318 13:12:46.328513 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:46.328600 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:46.328600 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:46.328600 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:46.328600 master-0 kubenswrapper[7146]: I0318 13:12:46.328584 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:47.328364 master-0 kubenswrapper[7146]: I0318 13:12:47.328282 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:47.328364 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:47.328364 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:47.328364 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:47.328364 master-0 kubenswrapper[7146]: I0318 13:12:47.328351 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:48.328913 master-0 kubenswrapper[7146]: I0318 13:12:48.328670 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:48.328913 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:48.328913 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:48.328913 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:48.328913 master-0 kubenswrapper[7146]: I0318 13:12:48.328743 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:49.327869 master-0 kubenswrapper[7146]: I0318 13:12:49.327754 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:49.327869 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:49.327869 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:49.327869 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:49.328350 master-0 kubenswrapper[7146]: I0318 13:12:49.327888 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:49.927353 master-0 kubenswrapper[7146]: I0318 13:12:49.927275 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/1.log"
Mar 18 13:12:49.928339 master-0 kubenswrapper[7146]: I0318 13:12:49.928173 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/0.log"
Mar 18 13:12:49.928339 master-0 kubenswrapper[7146]: I0318 13:12:49.928224 7146 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="737b35288b477956960fa12cc79eb83b193b7b471646ce5af1a3aaef15a0e026" exitCode=1
Mar 18 13:12:49.928339 master-0 kubenswrapper[7146]: I0318 13:12:49.928258 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerDied","Data":"737b35288b477956960fa12cc79eb83b193b7b471646ce5af1a3aaef15a0e026"}
Mar 18 13:12:49.928339 master-0 kubenswrapper[7146]: I0318 13:12:49.928304 7146 scope.go:117] "RemoveContainer" containerID="bd4c65659cdaf88672c351e368deda39b10476e44f4e0b79ea5e5dab975cb22c"
Mar 18 13:12:49.929390 master-0 kubenswrapper[7146]: I0318 13:12:49.929327 7146 scope.go:117] "RemoveContainer" containerID="737b35288b477956960fa12cc79eb83b193b7b471646ce5af1a3aaef15a0e026"
Mar 18 13:12:49.932410 master-0 kubenswrapper[7146]: E0318 13:12:49.929826 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38"
Mar 18 13:12:50.328768 master-0 kubenswrapper[7146]: I0318 13:12:50.328692 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:50.328768 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:50.328768 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:50.328768 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:50.329256 master-0 kubenswrapper[7146]: I0318 13:12:50.328774 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:50.937188 master-0 kubenswrapper[7146]: I0318 13:12:50.936857 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/1.log"
Mar 18 13:12:51.328505 master-0 kubenswrapper[7146]: I0318 13:12:51.328441 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:51.328505 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:51.328505 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:51.328505 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:51.329213 master-0 kubenswrapper[7146]: I0318 13:12:51.329115 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:52.328374 master-0 kubenswrapper[7146]: I0318 13:12:52.328306 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:52.328374 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:52.328374 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:52.328374 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:52.329223 master-0 kubenswrapper[7146]: I0318 13:12:52.328385 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:53.328491 master-0 kubenswrapper[7146]: I0318 13:12:53.328313 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:53.328491 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:53.328491 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:53.328491 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:53.329229 master-0 kubenswrapper[7146]: I0318 13:12:53.328529 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:54.328262 master-0 kubenswrapper[7146]: I0318 13:12:54.328206 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:54.328262 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:54.328262 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:54.328262 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:54.329311 master-0 kubenswrapper[7146]: I0318 13:12:54.328273 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:55.328576 master-0 kubenswrapper[7146]: I0318 13:12:55.328521 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:55.328576 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:55.328576 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:55.328576 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:55.329308 master-0 kubenswrapper[7146]: I0318 13:12:55.328593 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:56.328815 master-0 kubenswrapper[7146]: I0318 13:12:56.328731 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:56.328815 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:56.328815 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:56.328815 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:56.328815 master-0 kubenswrapper[7146]: I0318 13:12:56.328820 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:57.329064 master-0 kubenswrapper[7146]: I0318 13:12:57.328915 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:57.329064 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:57.329064 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:57.329064 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:57.329064 master-0 kubenswrapper[7146]: I0318 13:12:57.329066 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:58.328921 master-0 kubenswrapper[7146]: I0318 13:12:58.328823 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:58.328921 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:58.328921 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:58.328921 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:58.329547 master-0 kubenswrapper[7146]: I0318 13:12:58.328930 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:59.327223 master-0 kubenswrapper[7146]: I0318 13:12:59.327152 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:59.327223 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:12:59.327223 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:12:59.327223 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:12:59.327584 master-0 kubenswrapper[7146]: I0318 13:12:59.327251 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:00.327418 master-0 kubenswrapper[7146]: I0318 13:13:00.327342 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:00.327418 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:00.327418 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:00.327418 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:00.328083 master-0 kubenswrapper[7146]: I0318 13:13:00.327433 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:01.328129 master-0 kubenswrapper[7146]: I0318 13:13:01.327781 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:01.328129 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:01.328129 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:01.328129 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:01.328888 master-0 kubenswrapper[7146]: I0318 13:13:01.328155 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:02.328871 master-0 kubenswrapper[7146]: I0318 13:13:02.328753 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:02.328871 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:02.328871 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:02.328871 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:02.328871 master-0 kubenswrapper[7146]: I0318 13:13:02.328844 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:03.348290 master-0 kubenswrapper[7146]: I0318 13:13:03.348068 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:03.348290 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:03.348290 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:03.348290 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:03.348290 master-0 kubenswrapper[7146]: I0318 13:13:03.348214 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:03.358962 master-0 kubenswrapper[7146]: I0318 13:13:03.358885 7146 scope.go:117] "RemoveContainer" containerID="737b35288b477956960fa12cc79eb83b193b7b471646ce5af1a3aaef15a0e026"
Mar 18 13:13:04.022007 master-0 kubenswrapper[7146]: I0318 13:13:04.021870 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/1.log"
Mar 18 13:13:04.022215 master-0 kubenswrapper[7146]: I0318 13:13:04.022185 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7"}
Mar 18 13:13:04.327056 master-0 kubenswrapper[7146]: I0318 13:13:04.326917 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:04.327056 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:04.327056 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:04.327056 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:04.327056 master-0 kubenswrapper[7146]: I0318 13:13:04.326994 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:05.329479 master-0 kubenswrapper[7146]: I0318 13:13:05.329218 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:05.329479 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:05.329479 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:05.329479 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:05.329479 master-0 kubenswrapper[7146]: I0318 13:13:05.329337 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:06.327779 master-0 kubenswrapper[7146]: I0318 13:13:06.327712 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:06.327779 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:06.327779 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:06.327779 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:06.328156 master-0 kubenswrapper[7146]: I0318 13:13:06.327800 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:07.328683 master-0 kubenswrapper[7146]: I0318 13:13:07.328600 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:07.328683 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:07.328683 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:07.328683 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:07.329792 master-0 kubenswrapper[7146]: I0318 13:13:07.328695 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:08.328140 master-0 kubenswrapper[7146]: I0318 13:13:08.328056 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:08.328140 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:08.328140 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:08.328140 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:08.328140 master-0 kubenswrapper[7146]: I0318 13:13:08.328119 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:09.328250 master-0 kubenswrapper[7146]: I0318 13:13:09.328180 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:09.328250 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:09.328250 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:09.328250 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:09.328250 master-0 kubenswrapper[7146]: I0318 13:13:09.328246 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:10.328267 master-0 kubenswrapper[7146]: I0318 13:13:10.328183 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:10.328267 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:10.328267 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:10.328267 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:10.328267 master-0 kubenswrapper[7146]: I0318 13:13:10.328249 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:11.327537 master-0 kubenswrapper[7146]: I0318 13:13:11.327491 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:11.327537 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:11.327537 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:11.327537 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:11.327885 master-0 kubenswrapper[7146]: I0318 13:13:11.327573 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:11.554672 master-0 kubenswrapper[7146]: I0318 13:13:11.554615 7146 scope.go:117] "RemoveContainer" containerID="e5d871ce15c246b83610b31f823caa6e0c2380ca2682febc8546add0e167eb72" Mar 18 13:13:12.328931 master-0 kubenswrapper[7146]: I0318 13:13:12.328806 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:12.328931 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:12.328931 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:12.328931 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:12.329433 master-0 kubenswrapper[7146]: I0318 13:13:12.328926 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:13.327959 master-0 kubenswrapper[7146]: I0318 13:13:13.327837 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:13.327959 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:13.327959 master-0 kubenswrapper[7146]: 
[+]process-running ok Mar 18 13:13:13.327959 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:13.328676 master-0 kubenswrapper[7146]: I0318 13:13:13.327997 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:14.329968 master-0 kubenswrapper[7146]: I0318 13:13:14.329828 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:14.329968 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:14.329968 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:14.329968 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:14.330839 master-0 kubenswrapper[7146]: I0318 13:13:14.329986 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:15.327619 master-0 kubenswrapper[7146]: I0318 13:13:15.327504 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:15.327619 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:15.327619 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:15.327619 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:15.328538 master-0 kubenswrapper[7146]: I0318 13:13:15.327655 7146 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:16.328803 master-0 kubenswrapper[7146]: I0318 13:13:16.328720 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:16.328803 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:16.328803 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:16.328803 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:16.328803 master-0 kubenswrapper[7146]: I0318 13:13:16.328799 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:17.328925 master-0 kubenswrapper[7146]: I0318 13:13:17.328825 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:17.328925 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:17.328925 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:17.328925 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:17.329773 master-0 kubenswrapper[7146]: I0318 13:13:17.328966 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 13:13:18.327331 master-0 kubenswrapper[7146]: I0318 13:13:18.327246 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:18.327331 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:18.327331 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:18.327331 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:18.327331 master-0 kubenswrapper[7146]: I0318 13:13:18.327310 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:19.328406 master-0 kubenswrapper[7146]: I0318 13:13:19.328308 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:19.328406 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:19.328406 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:19.328406 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:19.329194 master-0 kubenswrapper[7146]: I0318 13:13:19.328432 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:20.328053 master-0 kubenswrapper[7146]: I0318 13:13:20.327924 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:20.328053 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:20.328053 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:20.328053 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:20.328400 master-0 kubenswrapper[7146]: I0318 13:13:20.328061 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:21.327738 master-0 kubenswrapper[7146]: I0318 13:13:21.327672 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:21.327738 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:21.327738 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:21.327738 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:21.328552 master-0 kubenswrapper[7146]: I0318 13:13:21.327751 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:22.327921 master-0 kubenswrapper[7146]: I0318 13:13:22.327840 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:22.327921 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:22.327921 
master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:22.327921 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:22.327921 master-0 kubenswrapper[7146]: I0318 13:13:22.327901 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:23.328538 master-0 kubenswrapper[7146]: I0318 13:13:23.328356 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:23.328538 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:23.328538 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:23.328538 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:23.328538 master-0 kubenswrapper[7146]: I0318 13:13:23.328466 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:24.327007 master-0 kubenswrapper[7146]: I0318 13:13:24.326958 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:24.327007 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:24.327007 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:24.327007 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:24.327306 master-0 kubenswrapper[7146]: I0318 13:13:24.327028 7146 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:25.328236 master-0 kubenswrapper[7146]: I0318 13:13:25.328153 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:25.328236 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:25.328236 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:25.328236 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:25.328236 master-0 kubenswrapper[7146]: I0318 13:13:25.328229 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:26.328149 master-0 kubenswrapper[7146]: I0318 13:13:26.328072 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:26.328149 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:26.328149 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:26.328149 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:26.328762 master-0 kubenswrapper[7146]: I0318 13:13:26.328153 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 13:13:27.328713 master-0 kubenswrapper[7146]: I0318 13:13:27.328633 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:27.328713 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:27.328713 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:27.328713 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:27.329403 master-0 kubenswrapper[7146]: I0318 13:13:27.328717 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:28.329054 master-0 kubenswrapper[7146]: I0318 13:13:28.328949 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:28.329054 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:28.329054 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:28.329054 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:28.329054 master-0 kubenswrapper[7146]: I0318 13:13:28.329045 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:29.327915 master-0 kubenswrapper[7146]: I0318 13:13:29.327844 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:29.327915 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:29.327915 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:29.327915 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:29.328280 master-0 kubenswrapper[7146]: I0318 13:13:29.327930 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:30.328554 master-0 kubenswrapper[7146]: I0318 13:13:30.328482 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:30.328554 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:30.328554 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:30.328554 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:30.328554 master-0 kubenswrapper[7146]: I0318 13:13:30.328554 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:31.329038 master-0 kubenswrapper[7146]: I0318 13:13:31.328929 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:31.329038 master-0 kubenswrapper[7146]: 
[-]has-synced failed: reason withheld Mar 18 13:13:31.329038 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:31.329038 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:31.329706 master-0 kubenswrapper[7146]: I0318 13:13:31.329079 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:32.327441 master-0 kubenswrapper[7146]: I0318 13:13:32.327353 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:32.327441 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:32.327441 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:32.327441 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:32.327869 master-0 kubenswrapper[7146]: I0318 13:13:32.327477 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:33.329231 master-0 kubenswrapper[7146]: I0318 13:13:33.329133 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:33.329231 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:33.329231 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:33.329231 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:33.330005 master-0 
kubenswrapper[7146]: I0318 13:13:33.329263 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:34.329223 master-0 kubenswrapper[7146]: I0318 13:13:34.329148 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:34.329223 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:34.329223 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:34.329223 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:34.329980 master-0 kubenswrapper[7146]: I0318 13:13:34.329253 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:35.328570 master-0 kubenswrapper[7146]: I0318 13:13:35.328481 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:35.328570 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:35.328570 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:35.328570 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:35.328892 master-0 kubenswrapper[7146]: I0318 13:13:35.328606 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:36.327618 master-0 kubenswrapper[7146]: I0318 13:13:36.327550 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:36.327618 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:36.327618 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:36.327618 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:36.328425 master-0 kubenswrapper[7146]: I0318 13:13:36.327636 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:37.329185 master-0 kubenswrapper[7146]: I0318 13:13:37.329056 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:37.329185 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:37.329185 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:37.329185 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:37.329754 master-0 kubenswrapper[7146]: I0318 13:13:37.329230 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:38.327695 master-0 kubenswrapper[7146]: I0318 13:13:38.327591 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:38.327695 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:38.327695 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:38.327695 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:38.327695 master-0 kubenswrapper[7146]: I0318 13:13:38.327677 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:39.328343 master-0 kubenswrapper[7146]: I0318 13:13:39.328239 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:13:39.328343 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:13:39.328343 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:13:39.328343 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:13:39.329009 master-0 kubenswrapper[7146]: I0318 13:13:39.328366 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:13:40.326670 master-0 kubenswrapper[7146]: I0318 13:13:40.326611 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:13:40.326670 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:40.326670 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:40.326670 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:40.326670 master-0 kubenswrapper[7146]: I0318 13:13:40.326668 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:41.328539 master-0 kubenswrapper[7146]: I0318 13:13:41.328443 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:41.328539 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:41.328539 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:41.328539 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:41.329276 master-0 kubenswrapper[7146]: I0318 13:13:41.328551 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:42.327848 master-0 kubenswrapper[7146]: I0318 13:13:42.327798 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:42.327848 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:42.327848 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:42.327848 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:42.328189 master-0 kubenswrapper[7146]: I0318 13:13:42.327872 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:43.328883 master-0 kubenswrapper[7146]: I0318 13:13:43.328784 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:43.328883 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:43.328883 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:43.328883 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:43.330050 master-0 kubenswrapper[7146]: I0318 13:13:43.328891 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:44.328158 master-0 kubenswrapper[7146]: I0318 13:13:44.328081 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:44.328158 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:44.328158 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:44.328158 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:44.328577 master-0 kubenswrapper[7146]: I0318 13:13:44.328169 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:45.328394 master-0 kubenswrapper[7146]: I0318 13:13:45.328260 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:45.328394 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:45.328394 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:45.328394 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:45.328394 master-0 kubenswrapper[7146]: I0318 13:13:45.328356 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:46.331487 master-0 kubenswrapper[7146]: I0318 13:13:46.331405 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:46.331487 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:46.331487 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:46.331487 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:46.332219 master-0 kubenswrapper[7146]: I0318 13:13:46.331507 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:47.328199 master-0 kubenswrapper[7146]: I0318 13:13:47.328118 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:47.328199 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:13:47.328199 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:13:47.328199 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:13:47.328199 master-0 kubenswrapper[7146]: I0318 13:13:47.328177 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:47.328812 master-0 kubenswrapper[7146]: I0318 13:13:47.328227 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:13:47.328812 master-0 kubenswrapper[7146]: I0318 13:13:47.328694 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"e11b0d0a2f2fcef8559280a2714debec3210ea7873ccaa447460e5bbe4ca1669"} pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 13:13:47.328812 master-0 kubenswrapper[7146]: I0318 13:13:47.328727 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" containerID="cri-o://e11b0d0a2f2fcef8559280a2714debec3210ea7873ccaa447460e5bbe4ca1669" gracePeriod=3600
Mar 18 13:14:33.606259 master-0 kubenswrapper[7146]: I0318 13:14:33.606182 7146 generic.go:334] "Generic (PLEG): container finished" podID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerID="e11b0d0a2f2fcef8559280a2714debec3210ea7873ccaa447460e5bbe4ca1669" exitCode=0
Mar 18 13:14:33.606875 master-0 kubenswrapper[7146]: I0318 13:14:33.606266 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerDied","Data":"e11b0d0a2f2fcef8559280a2714debec3210ea7873ccaa447460e5bbe4ca1669"}
Mar 18 13:14:34.613166 master-0 kubenswrapper[7146]: I0318 13:14:34.613094 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerStarted","Data":"31665db688945aa094b6891895ce672425d61018bce7b516675fdc844fb9eb7e"}
Mar 18 13:14:35.325631 master-0 kubenswrapper[7146]: I0318 13:14:35.325553 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:14:35.329468 master-0 kubenswrapper[7146]: I0318 13:14:35.329390 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:35.329468 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:35.329468 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:35.329468 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:35.329828 master-0 kubenswrapper[7146]: I0318 13:14:35.329495 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:36.327190 master-0 kubenswrapper[7146]: I0318 13:14:36.327116 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:36.327190 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:36.327190 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:36.327190 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:36.327190 master-0 kubenswrapper[7146]: I0318 13:14:36.327184 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:37.328244 master-0 kubenswrapper[7146]: I0318 13:14:37.328197 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:37.328244 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:37.328244 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:37.328244 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:37.328909 master-0 kubenswrapper[7146]: I0318 13:14:37.328286 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:38.328368 master-0 kubenswrapper[7146]: I0318 13:14:38.328023 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:38.328368 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:38.328368 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:38.328368 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:38.328368 master-0 kubenswrapper[7146]: I0318 13:14:38.328300 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:39.327868 master-0 kubenswrapper[7146]: I0318 13:14:39.327758 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:39.327868 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:39.327868 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:39.327868 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:39.327868 master-0 kubenswrapper[7146]: I0318 13:14:39.327845 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:40.327211 master-0 kubenswrapper[7146]: I0318 13:14:40.327109 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:40.327211 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:40.327211 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:40.327211 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:40.327211 master-0 kubenswrapper[7146]: I0318 13:14:40.327177 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:41.328341 master-0 kubenswrapper[7146]: I0318 13:14:41.328227 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:41.328341 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:41.328341 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:41.328341 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:41.329094 master-0 kubenswrapper[7146]: I0318 13:14:41.328363 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:42.328677 master-0 kubenswrapper[7146]: I0318 13:14:42.328537 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:42.328677 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:42.328677 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:42.328677 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:42.329729 master-0 kubenswrapper[7146]: I0318 13:14:42.328691 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:43.328376 master-0 kubenswrapper[7146]: I0318 13:14:43.328273 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:43.328376 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:43.328376 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:43.328376 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:43.329446 master-0 kubenswrapper[7146]: I0318 13:14:43.328387 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:44.326100 master-0 kubenswrapper[7146]: I0318 13:14:44.325871 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:14:44.328733 master-0 kubenswrapper[7146]: I0318 13:14:44.328670 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:44.328733 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:44.328733 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:44.328733 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:44.329292 master-0 kubenswrapper[7146]: I0318 13:14:44.328761 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:45.328735 master-0 kubenswrapper[7146]: I0318 13:14:45.328631 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:45.328735 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:45.328735 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:45.328735 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:45.329449 master-0 kubenswrapper[7146]: I0318 13:14:45.328766 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:46.327404 master-0 kubenswrapper[7146]: I0318 13:14:46.327298 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:46.327404 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:46.327404 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:46.327404 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:46.327404 master-0 kubenswrapper[7146]: I0318 13:14:46.327372 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:47.328366 master-0 kubenswrapper[7146]: I0318 13:14:47.328301 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:47.328366 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:47.328366 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:47.328366 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:47.328991 master-0 kubenswrapper[7146]: I0318 13:14:47.328376 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:48.329299 master-0 kubenswrapper[7146]: I0318 13:14:48.329156 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:48.329299 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:48.329299 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:48.329299 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:48.330320 master-0 kubenswrapper[7146]: I0318 13:14:48.329298 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:49.327913 master-0 kubenswrapper[7146]: I0318 13:14:49.327848 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:49.327913 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:49.327913 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:49.327913 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:49.327913 master-0 kubenswrapper[7146]: I0318 13:14:49.327909 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:50.327633 master-0 kubenswrapper[7146]: I0318 13:14:50.327576 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:50.327633 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:50.327633 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:50.327633 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:50.328203 master-0 kubenswrapper[7146]: I0318 13:14:50.327636 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:51.327611 master-0 kubenswrapper[7146]: I0318 13:14:51.327537 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:51.327611 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:51.327611 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:51.327611 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:51.328523 master-0 kubenswrapper[7146]: I0318 13:14:51.328374 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:52.327192 master-0 kubenswrapper[7146]: I0318 13:14:52.327128 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:52.327192 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:52.327192 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:52.327192 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:52.327782 master-0 kubenswrapper[7146]: I0318 13:14:52.327190 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:53.329340 master-0 kubenswrapper[7146]: I0318 13:14:53.329261 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:53.329340 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:53.329340 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:53.329340 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:53.330179 master-0 kubenswrapper[7146]: I0318 13:14:53.329363 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:54.327497 master-0 kubenswrapper[7146]: I0318 13:14:54.327458 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:54.327497 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:54.327497 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:54.327497 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:54.327858 master-0 kubenswrapper[7146]: I0318 13:14:54.327512 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:55.327813 master-0 kubenswrapper[7146]: I0318 13:14:55.327773 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:55.327813 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:55.327813 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:55.327813 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:55.328472 master-0 kubenswrapper[7146]: I0318 13:14:55.328436 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:56.327038 master-0 kubenswrapper[7146]: I0318 13:14:56.326983 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:56.327038 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:56.327038 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:56.327038 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:56.327289 master-0 kubenswrapper[7146]: I0318 13:14:56.327056 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:57.329022 master-0 kubenswrapper[7146]: I0318 13:14:57.328954 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:57.329022 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:57.329022 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:57.329022 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:57.329646 master-0 kubenswrapper[7146]: I0318 13:14:57.329036 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:58.328003 master-0 kubenswrapper[7146]: I0318 13:14:58.327889 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:58.328003 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:58.328003 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:58.328003 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:58.328720 master-0 kubenswrapper[7146]: I0318 13:14:58.328022 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:14:59.327698 master-0 kubenswrapper[7146]: I0318 13:14:59.327635 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:59.327698 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:14:59.327698 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:14:59.327698 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:14:59.328451 master-0 kubenswrapper[7146]: I0318 13:14:59.328416 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:00.327598 master-0 kubenswrapper[7146]: I0318 13:15:00.327553 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:00.327598 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:00.327598 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:00.327598 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:00.328482 master-0 kubenswrapper[7146]: I0318 13:15:00.328448 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:01.327965 master-0 kubenswrapper[7146]: I0318 13:15:01.327835 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:01.327965 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:01.327965 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:01.327965 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:01.327965 master-0 kubenswrapper[7146]: I0318 13:15:01.327952 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:02.328245 master-0 kubenswrapper[7146]: I0318 13:15:02.328087 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:02.328245 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:02.328245 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:02.328245 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:02.329156 master-0 kubenswrapper[7146]: I0318 13:15:02.328361 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:03.327645 master-0 kubenswrapper[7146]: I0318 13:15:03.327582 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:03.327645 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:03.327645 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:03.327645 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:03.327645 master-0 kubenswrapper[7146]: I0318 13:15:03.327645 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:04.327052 master-0 kubenswrapper[7146]: I0318 13:15:04.326997 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:04.327052 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:04.327052 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:04.327052 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:04.327622 master-0 kubenswrapper[7146]: I0318 13:15:04.327053 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:04.782099 master-0 kubenswrapper[7146]: I0318 13:15:04.782065 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/2.log"
Mar 18 13:15:04.782899 master-0 kubenswrapper[7146]: I0318 13:15:04.782854 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/1.log"
Mar 18 13:15:04.783305 master-0 kubenswrapper[7146]: I0318 13:15:04.783276 7146 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7" exitCode=1
Mar 18 13:15:04.783386 master-0 kubenswrapper[7146]: I0318 13:15:04.783312 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerDied","Data":"7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7"}
Mar 18 13:15:04.783386 master-0 kubenswrapper[7146]: I0318 13:15:04.783374 7146 scope.go:117] "RemoveContainer" containerID="737b35288b477956960fa12cc79eb83b193b7b471646ce5af1a3aaef15a0e026"
Mar 18 13:15:04.786202 master-0 kubenswrapper[7146]: I0318 13:15:04.786172 7146 scope.go:117] "RemoveContainer" containerID="7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7"
Mar 18 13:15:04.786613 master-0 kubenswrapper[7146]: E0318 13:15:04.786590 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38"
Mar 18 13:15:05.326745 master-0 kubenswrapper[7146]: I0318 13:15:05.326695 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:05.326745 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:05.326745 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:05.326745 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:05.327073 master-0 kubenswrapper[7146]: I0318 13:15:05.326751 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:05.790808 master-0 kubenswrapper[7146]: I0318 13:15:05.790743 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/2.log"
Mar 18 13:15:06.327872 master-0 kubenswrapper[7146]: I0318 13:15:06.327825 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:06.327872 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:06.327872 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:06.327872 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:06.328175 master-0 kubenswrapper[7146]: I0318 13:15:06.327890 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:07.326695 master-0 kubenswrapper[7146]: I0318 13:15:07.326641 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:07.326695 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:07.326695 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:07.326695 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:07.327214 master-0 kubenswrapper[7146]: I0318 13:15:07.326705 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:08.327295 master-0 kubenswrapper[7146]: I0318 13:15:08.327244 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:08.327295 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:08.327295 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:08.327295 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:08.327848 master-0 kubenswrapper[7146]: I0318 13:15:08.327310 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:09.327420 master-0 kubenswrapper[7146]: I0318 13:15:09.327337 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:09.327420 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:09.327420 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:09.327420 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:09.327420 master-0 kubenswrapper[7146]: I0318 13:15:09.327415 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:10.328535 master-0 kubenswrapper[7146]: I0318 13:15:10.328462 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:10.328535 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:10.328535 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:10.328535 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:10.329541 master-0 kubenswrapper[7146]: I0318 13:15:10.328542 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:11.328078 master-0 kubenswrapper[7146]: I0318 13:15:11.327975 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:15:11.328078 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:11.328078 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:11.328078 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:11.328078 master-0 kubenswrapper[7146]: I0318 13:15:11.328051 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:12.327894 master-0 kubenswrapper[7146]: I0318 13:15:12.327810 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:12.327894 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:12.327894 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:12.327894 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:12.329251 master-0 kubenswrapper[7146]: I0318 13:15:12.327953 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:13.329636 master-0 kubenswrapper[7146]: I0318 13:15:13.329542 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:13.329636 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:13.329636 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:13.329636 master-0 kubenswrapper[7146]: healthz 
check failed Mar 18 13:15:13.330359 master-0 kubenswrapper[7146]: I0318 13:15:13.329645 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:14.329221 master-0 kubenswrapper[7146]: I0318 13:15:14.329167 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:14.329221 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:14.329221 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:14.329221 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:14.329221 master-0 kubenswrapper[7146]: I0318 13:15:14.329239 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:15.329004 master-0 kubenswrapper[7146]: I0318 13:15:15.328866 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:15.329004 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:15.329004 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:15.329004 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:15.330296 master-0 kubenswrapper[7146]: I0318 13:15:15.329007 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" 
podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:16.328050 master-0 kubenswrapper[7146]: I0318 13:15:16.327988 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:16.328050 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:16.328050 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:16.328050 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:16.328407 master-0 kubenswrapper[7146]: I0318 13:15:16.328075 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:17.328003 master-0 kubenswrapper[7146]: I0318 13:15:17.327922 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:17.328003 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:17.328003 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:17.328003 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:17.328779 master-0 kubenswrapper[7146]: I0318 13:15:17.328745 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:18.328078 master-0 kubenswrapper[7146]: I0318 13:15:18.328004 7146 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:18.328078 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:18.328078 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:18.328078 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:18.328709 master-0 kubenswrapper[7146]: I0318 13:15:18.328081 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:19.328060 master-0 kubenswrapper[7146]: I0318 13:15:19.327983 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:19.328060 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:19.328060 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:19.328060 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:19.329106 master-0 kubenswrapper[7146]: I0318 13:15:19.328093 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:20.326877 master-0 kubenswrapper[7146]: I0318 13:15:20.326819 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:20.326877 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:20.326877 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:20.326877 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:20.327165 master-0 kubenswrapper[7146]: I0318 13:15:20.326885 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:20.358035 master-0 kubenswrapper[7146]: I0318 13:15:20.357977 7146 scope.go:117] "RemoveContainer" containerID="7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7" Mar 18 13:15:20.358530 master-0 kubenswrapper[7146]: E0318 13:15:20.358285 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:15:21.329054 master-0 kubenswrapper[7146]: I0318 13:15:21.328871 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:21.329054 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:21.329054 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:21.329054 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:21.329054 master-0 kubenswrapper[7146]: I0318 13:15:21.329004 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:22.327725 master-0 kubenswrapper[7146]: I0318 13:15:22.327670 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:22.327725 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:22.327725 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:22.327725 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:22.328351 master-0 kubenswrapper[7146]: I0318 13:15:22.327746 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:23.328428 master-0 kubenswrapper[7146]: I0318 13:15:23.328232 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:23.328428 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:23.328428 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:23.328428 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:23.328428 master-0 kubenswrapper[7146]: I0318 13:15:23.328344 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:24.331163 
master-0 kubenswrapper[7146]: I0318 13:15:24.331085 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:24.331163 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:24.331163 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:24.331163 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:24.332154 master-0 kubenswrapper[7146]: I0318 13:15:24.331181 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:25.328265 master-0 kubenswrapper[7146]: I0318 13:15:25.328203 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:25.328265 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:25.328265 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:25.328265 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:25.328630 master-0 kubenswrapper[7146]: I0318 13:15:25.328297 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:26.331952 master-0 kubenswrapper[7146]: I0318 13:15:26.331865 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:26.331952 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:26.331952 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:26.331952 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:26.332528 master-0 kubenswrapper[7146]: I0318 13:15:26.331966 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:27.328311 master-0 kubenswrapper[7146]: I0318 13:15:27.328234 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:27.328311 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:27.328311 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:27.328311 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:27.328723 master-0 kubenswrapper[7146]: I0318 13:15:27.328335 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:28.326871 master-0 kubenswrapper[7146]: I0318 13:15:28.326825 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:28.326871 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:28.326871 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:28.326871 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:28.327704 master-0 kubenswrapper[7146]: I0318 13:15:28.327669 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:29.327795 master-0 kubenswrapper[7146]: I0318 13:15:29.327721 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:29.327795 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:29.327795 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:29.327795 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:29.328452 master-0 kubenswrapper[7146]: I0318 13:15:29.327814 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:30.328605 master-0 kubenswrapper[7146]: I0318 13:15:30.328546 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:30.328605 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:30.328605 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:30.328605 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:30.329322 master-0 kubenswrapper[7146]: I0318 13:15:30.328618 7146 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:31.327314 master-0 kubenswrapper[7146]: I0318 13:15:31.327233 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:31.327314 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:31.327314 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:31.327314 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:31.327748 master-0 kubenswrapper[7146]: I0318 13:15:31.327322 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:32.328568 master-0 kubenswrapper[7146]: I0318 13:15:32.328408 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:32.328568 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:32.328568 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:32.328568 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:32.328568 master-0 kubenswrapper[7146]: I0318 13:15:32.328591 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 13:15:33.327486 master-0 kubenswrapper[7146]: I0318 13:15:33.327389 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:33.327486 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:33.327486 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:33.327486 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:33.327486 master-0 kubenswrapper[7146]: I0318 13:15:33.327463 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:34.328652 master-0 kubenswrapper[7146]: I0318 13:15:34.328570 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:34.328652 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:15:34.328652 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:15:34.328652 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:15:34.329549 master-0 kubenswrapper[7146]: I0318 13:15:34.328658 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:35.327322 master-0 kubenswrapper[7146]: I0318 13:15:35.327253 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:35.327322 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:35.327322 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:35.327322 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:35.327571 master-0 kubenswrapper[7146]: I0318 13:15:35.327349 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:35.358234 master-0 kubenswrapper[7146]: I0318 13:15:35.357847 7146 scope.go:117] "RemoveContainer" containerID="7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7"
Mar 18 13:15:35.982335 master-0 kubenswrapper[7146]: I0318 13:15:35.982297 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/2.log"
Mar 18 13:15:35.983044 master-0 kubenswrapper[7146]: I0318 13:15:35.983002 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9"}
Mar 18 13:15:36.329795 master-0 kubenswrapper[7146]: I0318 13:15:36.329593 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:36.329795 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:36.329795 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:36.329795 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:36.329795 master-0 kubenswrapper[7146]: I0318 13:15:36.329712 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:37.327281 master-0 kubenswrapper[7146]: I0318 13:15:37.327162 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:37.327281 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:37.327281 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:37.327281 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:37.327281 master-0 kubenswrapper[7146]: I0318 13:15:37.327272 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:38.329375 master-0 kubenswrapper[7146]: I0318 13:15:38.329305 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:38.329375 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:38.329375 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:38.329375 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:38.330036 master-0 kubenswrapper[7146]: I0318 13:15:38.329388 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:39.328565 master-0 kubenswrapper[7146]: I0318 13:15:39.328470 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:39.328565 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:39.328565 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:39.328565 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:39.328565 master-0 kubenswrapper[7146]: I0318 13:15:39.328560 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:40.329283 master-0 kubenswrapper[7146]: I0318 13:15:40.329128 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:40.329283 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:40.329283 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:40.329283 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:40.329283 master-0 kubenswrapper[7146]: I0318 13:15:40.329201 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:41.327958 master-0 kubenswrapper[7146]: I0318 13:15:41.327879 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:41.327958 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:41.327958 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:41.327958 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:41.328230 master-0 kubenswrapper[7146]: I0318 13:15:41.327970 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:42.328005 master-0 kubenswrapper[7146]: I0318 13:15:42.327914 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:42.328005 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:42.328005 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:42.328005 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:42.328771 master-0 kubenswrapper[7146]: I0318 13:15:42.328047 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:43.329107 master-0 kubenswrapper[7146]: I0318 13:15:43.329025 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:43.329107 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:43.329107 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:43.329107 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:43.329678 master-0 kubenswrapper[7146]: I0318 13:15:43.329141 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:44.333379 master-0 kubenswrapper[7146]: I0318 13:15:44.333222 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:44.333379 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:44.333379 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:44.333379 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:44.334718 master-0 kubenswrapper[7146]: I0318 13:15:44.333405 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:45.328175 master-0 kubenswrapper[7146]: I0318 13:15:45.328113 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:45.328175 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:45.328175 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:45.328175 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:45.328595 master-0 kubenswrapper[7146]: I0318 13:15:45.328215 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:46.328104 master-0 kubenswrapper[7146]: I0318 13:15:46.327961 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:46.328104 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:46.328104 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:46.328104 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:46.328104 master-0 kubenswrapper[7146]: I0318 13:15:46.328027 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:47.327749 master-0 kubenswrapper[7146]: I0318 13:15:47.327646 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:47.327749 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:47.327749 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:47.327749 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:47.328117 master-0 kubenswrapper[7146]: I0318 13:15:47.327736 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:48.327385 master-0 kubenswrapper[7146]: I0318 13:15:48.327346 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:48.327385 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:48.327385 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:48.327385 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:48.329980 master-0 kubenswrapper[7146]: I0318 13:15:48.327901 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:49.327395 master-0 kubenswrapper[7146]: I0318 13:15:49.327332 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:49.327395 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:49.327395 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:49.327395 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:49.327981 master-0 kubenswrapper[7146]: I0318 13:15:49.327417 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:50.328555 master-0 kubenswrapper[7146]: I0318 13:15:50.328477 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:50.328555 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:50.328555 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:50.328555 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:50.329132 master-0 kubenswrapper[7146]: I0318 13:15:50.328555 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:51.328178 master-0 kubenswrapper[7146]: I0318 13:15:51.328097 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:51.328178 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:51.328178 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:51.328178 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:51.328472 master-0 kubenswrapper[7146]: I0318 13:15:51.328187 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:52.328332 master-0 kubenswrapper[7146]: I0318 13:15:52.328235 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:52.328332 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:52.328332 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:52.328332 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:52.329694 master-0 kubenswrapper[7146]: I0318 13:15:52.328361 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:53.328574 master-0 kubenswrapper[7146]: I0318 13:15:53.328493 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:53.328574 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:53.328574 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:53.328574 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:53.329726 master-0 kubenswrapper[7146]: I0318 13:15:53.329678 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:53.742539 master-0 kubenswrapper[7146]: I0318 13:15:53.742461 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 18 13:15:53.743689 master-0 kubenswrapper[7146]: I0318 13:15:53.743624 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.747213 master-0 kubenswrapper[7146]: I0318 13:15:53.746818 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-bcdsk"
Mar 18 13:15:53.747213 master-0 kubenswrapper[7146]: I0318 13:15:53.747006 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 18 13:15:53.759886 master-0 kubenswrapper[7146]: I0318 13:15:53.759828 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 18 13:15:53.805665 master-0 kubenswrapper[7146]: I0318 13:15:53.805583 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-var-lock\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.805982 master-0 kubenswrapper[7146]: I0318 13:15:53.805691 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5879ced8-4ac1-40e3-bf93-38b8a7497823-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.805982 master-0 kubenswrapper[7146]: I0318 13:15:53.805748 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.907519 master-0 kubenswrapper[7146]: I0318 13:15:53.907454 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-var-lock\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.907883 master-0 kubenswrapper[7146]: I0318 13:15:53.907647 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-var-lock\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.907883 master-0 kubenswrapper[7146]: I0318 13:15:53.907840 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5879ced8-4ac1-40e3-bf93-38b8a7497823-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.908348 master-0 kubenswrapper[7146]: I0318 13:15:53.908302 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.908475 master-0 kubenswrapper[7146]: I0318 13:15:53.908433 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:53.934808 master-0 kubenswrapper[7146]: I0318 13:15:53.934743 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5879ced8-4ac1-40e3-bf93-38b8a7497823-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:54.062351 master-0 kubenswrapper[7146]: I0318 13:15:54.062154 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 18 13:15:54.328325 master-0 kubenswrapper[7146]: I0318 13:15:54.328221 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:54.328325 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:54.328325 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:54.328325 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:54.328625 master-0 kubenswrapper[7146]: I0318 13:15:54.328596 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:54.448658 master-0 kubenswrapper[7146]: I0318 13:15:54.448612 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 18 13:15:54.452199 master-0 kubenswrapper[7146]: W0318 13:15:54.452161 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5879ced8_4ac1_40e3_bf93_38b8a7497823.slice/crio-792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc WatchSource:0}: Error finding container 792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc: Status 404 returned error can't find the container with id 792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc
Mar 18 13:15:55.113586 master-0 kubenswrapper[7146]: I0318 13:15:55.113150 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5879ced8-4ac1-40e3-bf93-38b8a7497823","Type":"ContainerStarted","Data":"eb1bc4c2de4eef02c4efa419b662829eddc1e0031cc060ee0744bc0347f66eeb"}
Mar 18 13:15:55.113586 master-0 kubenswrapper[7146]: I0318 13:15:55.113558 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5879ced8-4ac1-40e3-bf93-38b8a7497823","Type":"ContainerStarted","Data":"792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc"}
Mar 18 13:15:55.137542 master-0 kubenswrapper[7146]: I0318 13:15:55.137466 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.13744971 podStartE2EDuration="2.13744971s" podCreationTimestamp="2026-03-18 13:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:15:55.136659988 +0000 UTC m=+463.944877369" watchObservedRunningTime="2026-03-18 13:15:55.13744971 +0000 UTC m=+463.945667071"
Mar 18 13:15:55.329117 master-0 kubenswrapper[7146]: I0318 13:15:55.329038 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:55.329117 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:55.329117 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:55.329117 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:55.329117 master-0 kubenswrapper[7146]: I0318 13:15:55.329115 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:56.328220 master-0 kubenswrapper[7146]: I0318 13:15:56.328129 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:56.328220 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:56.328220 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:56.328220 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:56.328220 master-0 kubenswrapper[7146]: I0318 13:15:56.328188 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:57.328336 master-0 kubenswrapper[7146]: I0318 13:15:57.328238 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:57.328336 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:57.328336 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:57.328336 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:57.328336 master-0 kubenswrapper[7146]: I0318 13:15:57.328311 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:58.328628 master-0 kubenswrapper[7146]: I0318 13:15:58.328551 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:58.328628 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:58.328628 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:58.328628 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:58.329354 master-0 kubenswrapper[7146]: I0318 13:15:58.328635 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:59.327154 master-0 kubenswrapper[7146]: I0318 13:15:59.327072 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:59.327154 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:15:59.327154 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:15:59.327154 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:15:59.327425 master-0 kubenswrapper[7146]: I0318 13:15:59.327164 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:00.327488 master-0 kubenswrapper[7146]: I0318 13:16:00.327429 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:00.327488 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:00.327488 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:00.327488 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:00.328054 master-0 kubenswrapper[7146]: I0318 13:16:00.327498 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:01.327806 master-0 kubenswrapper[7146]: I0318 13:16:01.327763 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:01.327806 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:01.327806 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:01.327806 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:01.328603 master-0 kubenswrapper[7146]: I0318 13:16:01.328565 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:02.327191 master-0 kubenswrapper[7146]: I0318 13:16:02.327124 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:02.327191 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:02.327191 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:02.327191 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:02.327486 master-0 kubenswrapper[7146]: I0318 13:16:02.327195 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:03.333644 master-0 kubenswrapper[7146]: I0318 13:16:03.333581 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:03.333644 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:03.333644 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:03.333644 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:03.334462 master-0 kubenswrapper[7146]: I0318 13:16:03.333667 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:04.328199 master-0 kubenswrapper[7146]: I0318 13:16:04.328160 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:04.328199 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:04.328199 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:04.328199 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:04.328574 master-0 kubenswrapper[7146]: I0318 13:16:04.328546 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:05.328571 master-0 kubenswrapper[7146]: I0318 13:16:05.328498 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:05.328571 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:05.328571 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:05.328571 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:05.328571 master-0 kubenswrapper[7146]: I0318 13:16:05.328561 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:06.328408 master-0 kubenswrapper[7146]: I0318 13:16:06.328359 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:06.328408 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:06.328408 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:06.328408 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:06.329108 master-0 kubenswrapper[7146]: I0318 13:16:06.328430 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:07.328237 master-0 kubenswrapper[7146]: I0318 13:16:07.328181 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:07.328237 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:07.328237 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:07.328237 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:07.328237 master-0 kubenswrapper[7146]: I0318 13:16:07.328249 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:08.327244 master-0 kubenswrapper[7146]: I0318 13:16:08.327193 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:08.327244 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:08.327244 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:08.327244 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:08.327805 master-0 kubenswrapper[7146]: I0318 13:16:08.327253 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:09.328022 master-0 kubenswrapper[7146]: I0318 13:16:09.327977 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:09.328022 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:09.328022 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:09.328022 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:09.328776 master-0 kubenswrapper[7146]: I0318 13:16:09.328043 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:10.328219 master-0 kubenswrapper[7146]: I0318 13:16:10.328167 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:10.328219 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:10.328219 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:10.328219 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:10.328858 master-0 kubenswrapper[7146]: I0318 13:16:10.328258 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:16:11.328361 master-0 kubenswrapper[7146]: I0318 13:16:11.328297 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:16:11.328361 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:16:11.328361 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:16:11.328361 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:16:11.328910 master-0
kubenswrapper[7146]: I0318 13:16:11.328365 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:12.328226 master-0 kubenswrapper[7146]: I0318 13:16:12.328164 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:12.328226 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:12.328226 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:12.328226 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:12.328797 master-0 kubenswrapper[7146]: I0318 13:16:12.328260 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:13.328798 master-0 kubenswrapper[7146]: I0318 13:16:13.328720 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:13.328798 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:13.328798 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:13.328798 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:13.329418 master-0 kubenswrapper[7146]: I0318 13:16:13.328922 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:14.329220 master-0 kubenswrapper[7146]: I0318 13:16:14.329143 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:14.329220 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:14.329220 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:14.329220 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:14.330449 master-0 kubenswrapper[7146]: I0318 13:16:14.329244 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:15.327350 master-0 kubenswrapper[7146]: I0318 13:16:15.327277 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:15.327350 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:15.327350 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:15.327350 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:15.327350 master-0 kubenswrapper[7146]: I0318 13:16:15.327342 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:16.327922 master-0 kubenswrapper[7146]: I0318 13:16:16.327852 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:16.327922 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:16.327922 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:16.327922 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:16.328550 master-0 kubenswrapper[7146]: I0318 13:16:16.327953 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:17.328362 master-0 kubenswrapper[7146]: I0318 13:16:17.328197 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:17.328362 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:17.328362 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:17.328362 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:17.328362 master-0 kubenswrapper[7146]: I0318 13:16:17.328270 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:18.327322 master-0 kubenswrapper[7146]: I0318 13:16:18.327256 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:16:18.327322 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:18.327322 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:18.327322 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:18.327656 master-0 kubenswrapper[7146]: I0318 13:16:18.327342 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:19.328172 master-0 kubenswrapper[7146]: I0318 13:16:19.328060 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:19.328172 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:19.328172 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:19.328172 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:19.328172 master-0 kubenswrapper[7146]: I0318 13:16:19.328153 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:20.327628 master-0 kubenswrapper[7146]: I0318 13:16:20.327552 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:20.327628 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:20.327628 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:20.327628 master-0 kubenswrapper[7146]: healthz 
check failed Mar 18 13:16:20.327628 master-0 kubenswrapper[7146]: I0318 13:16:20.327621 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:21.328986 master-0 kubenswrapper[7146]: I0318 13:16:21.328908 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:21.328986 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:21.328986 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:21.328986 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:21.329639 master-0 kubenswrapper[7146]: I0318 13:16:21.328999 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:21.955011 master-0 kubenswrapper[7146]: I0318 13:16:21.954913 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 18 13:16:21.956043 master-0 kubenswrapper[7146]: I0318 13:16:21.955971 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:21.957512 master-0 kubenswrapper[7146]: I0318 13:16:21.957477 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-kfzqm" Mar 18 13:16:21.958012 master-0 kubenswrapper[7146]: I0318 13:16:21.957892 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 13:16:21.966333 master-0 kubenswrapper[7146]: I0318 13:16:21.966246 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 18 13:16:22.004794 master-0 kubenswrapper[7146]: I0318 13:16:22.004735 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.004987 master-0 kubenswrapper[7146]: I0318 13:16:22.004808 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.004987 master-0 kubenswrapper[7146]: I0318 13:16:22.004846 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-var-lock\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.106504 master-0 kubenswrapper[7146]: I0318 13:16:22.106429 7146 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.106732 master-0 kubenswrapper[7146]: I0318 13:16:22.106521 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.106732 master-0 kubenswrapper[7146]: I0318 13:16:22.106557 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-var-lock\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.106732 master-0 kubenswrapper[7146]: I0318 13:16:22.106608 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.106732 master-0 kubenswrapper[7146]: I0318 13:16:22.106682 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-var-lock\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.123152 master-0 kubenswrapper[7146]: I0318 13:16:22.123093 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.288684 master-0 kubenswrapper[7146]: I0318 13:16:22.288526 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:16:22.332659 master-0 kubenswrapper[7146]: I0318 13:16:22.332587 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:22.332659 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:22.332659 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:22.332659 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:22.333308 master-0 kubenswrapper[7146]: I0318 13:16:22.332669 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:22.926703 master-0 kubenswrapper[7146]: I0318 13:16:22.926640 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 18 13:16:23.288779 master-0 kubenswrapper[7146]: I0318 13:16:23.288700 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"2fca2c29-3791-43b8-97f1-a9a6d58ec92d","Type":"ContainerStarted","Data":"abd6d9cd064ffc49598289235ab6b846f24e69f6bc0b898e367dc9ec6a8b35e1"} Mar 18 13:16:23.328161 master-0 kubenswrapper[7146]: I0318 13:16:23.328097 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:23.328161 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:23.328161 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:23.328161 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:23.328490 master-0 kubenswrapper[7146]: I0318 13:16:23.328167 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:23.484144 master-0 kubenswrapper[7146]: I0318 13:16:23.481340 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mptsw"] Mar 18 13:16:23.484731 master-0 kubenswrapper[7146]: I0318 13:16:23.484670 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.490202 master-0 kubenswrapper[7146]: I0318 13:16:23.487922 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 18 13:16:23.490202 master-0 kubenswrapper[7146]: I0318 13:16:23.488247 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-qzrnm" Mar 18 13:16:23.638859 master-0 kubenswrapper[7146]: I0318 13:16:23.636444 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.638859 master-0 kubenswrapper[7146]: I0318 13:16:23.636525 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.638859 master-0 kubenswrapper[7146]: I0318 13:16:23.636550 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhq2\" (UniqueName: \"kubernetes.io/projected/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-kube-api-access-wwhq2\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.638859 master-0 kubenswrapper[7146]: I0318 13:16:23.636574 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-ready\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.737635 master-0 kubenswrapper[7146]: I0318 13:16:23.737548 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.737635 master-0 kubenswrapper[7146]: I0318 13:16:23.737613 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwhq2\" (UniqueName: \"kubernetes.io/projected/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-kube-api-access-wwhq2\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.738093 master-0 kubenswrapper[7146]: I0318 13:16:23.737760 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-ready\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.738093 master-0 kubenswrapper[7146]: I0318 13:16:23.737767 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.738093 master-0 kubenswrapper[7146]: I0318 13:16:23.737896 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.738424 master-0 kubenswrapper[7146]: I0318 13:16:23.738387 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-ready\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.739286 master-0 kubenswrapper[7146]: I0318 13:16:23.739221 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.755836 master-0 kubenswrapper[7146]: I0318 13:16:23.755758 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwhq2\" (UniqueName: \"kubernetes.io/projected/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-kube-api-access-wwhq2\") pod \"cni-sysctl-allowlist-ds-mptsw\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.808287 master-0 kubenswrapper[7146]: I0318 13:16:23.808163 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:23.830866 master-0 kubenswrapper[7146]: W0318 13:16:23.830743 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod956513bf_3b98_4b0d_aca7_ccc3fdf8ae12.slice/crio-fd6b0b032151d2881a4a54074951594cc7f0bc7d221bf627ffe89c45750bb858 WatchSource:0}: Error finding container fd6b0b032151d2881a4a54074951594cc7f0bc7d221bf627ffe89c45750bb858: Status 404 returned error can't find the container with id fd6b0b032151d2881a4a54074951594cc7f0bc7d221bf627ffe89c45750bb858 Mar 18 13:16:24.295544 master-0 kubenswrapper[7146]: I0318 13:16:24.295411 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"2fca2c29-3791-43b8-97f1-a9a6d58ec92d","Type":"ContainerStarted","Data":"e194112a7651927c16369879335d3ba30bda7302ae714dc813e610c582b27c4a"} Mar 18 13:16:24.296591 master-0 kubenswrapper[7146]: I0318 13:16:24.296553 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" event={"ID":"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12","Type":"ContainerStarted","Data":"30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1"} Mar 18 13:16:24.296702 master-0 kubenswrapper[7146]: I0318 13:16:24.296599 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" event={"ID":"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12","Type":"ContainerStarted","Data":"fd6b0b032151d2881a4a54074951594cc7f0bc7d221bf627ffe89c45750bb858"} Mar 18 13:16:24.296914 master-0 kubenswrapper[7146]: I0318 13:16:24.296897 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:24.317160 master-0 kubenswrapper[7146]: I0318 13:16:24.317088 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=3.317063902 podStartE2EDuration="3.317063902s" podCreationTimestamp="2026-03-18 13:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:16:24.315504257 +0000 UTC m=+493.123721628" watchObservedRunningTime="2026-03-18 13:16:24.317063902 +0000 UTC m=+493.125281283" Mar 18 13:16:24.328636 master-0 kubenswrapper[7146]: I0318 13:16:24.328596 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:24.328636 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:24.328636 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:24.328636 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:24.328966 master-0 kubenswrapper[7146]: I0318 13:16:24.328928 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:24.375817 master-0 kubenswrapper[7146]: I0318 13:16:24.375748 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" podStartSLOduration=1.375729051 podStartE2EDuration="1.375729051s" podCreationTimestamp="2026-03-18 13:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:16:24.374255829 +0000 UTC m=+493.182473210" watchObservedRunningTime="2026-03-18 13:16:24.375729051 +0000 UTC m=+493.183946412" Mar 18 13:16:25.321701 master-0 kubenswrapper[7146]: I0318 13:16:25.321644 7146 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:25.327134 master-0 kubenswrapper[7146]: I0318 13:16:25.327083 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:25.327134 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:25.327134 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:25.327134 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:25.327333 master-0 kubenswrapper[7146]: I0318 13:16:25.327145 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:25.564359 master-0 kubenswrapper[7146]: I0318 13:16:25.564290 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mptsw"] Mar 18 13:16:25.723420 master-0 kubenswrapper[7146]: I0318 13:16:25.723355 7146 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:16:25.723616 master-0 kubenswrapper[7146]: E0318 13:16:25.723376 7146 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 18 13:16:25.723738 master-0 kubenswrapper[7146]: I0318 13:16:25.723681 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" 
containerID="cri-o://62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5" gracePeriod=30 Mar 18 13:16:25.723858 master-0 kubenswrapper[7146]: I0318 13:16:25.723746 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c" gracePeriod=30 Mar 18 13:16:25.723858 master-0 kubenswrapper[7146]: I0318 13:16:25.723833 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc" gracePeriod=30 Mar 18 13:16:25.723970 master-0 kubenswrapper[7146]: I0318 13:16:25.723884 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6" gracePeriod=30 Mar 18 13:16:25.723970 master-0 kubenswrapper[7146]: I0318 13:16:25.723926 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f" gracePeriod=30 Mar 18 13:16:25.726302 master-0 kubenswrapper[7146]: I0318 13:16:25.726272 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: E0318 13:16:25.726499 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: I0318 13:16:25.726512 7146 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: E0318 13:16:25.726524 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: I0318 13:16:25.726530 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: E0318 13:16:25.726543 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: I0318 13:16:25.726548 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: E0318 13:16:25.726556 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: I0318 13:16:25.726561 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: E0318 13:16:25.726574 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 13:16:25.726615 master-0 kubenswrapper[7146]: I0318 13:16:25.726580 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: E0318 13:16:25.741322 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741392 7146 
state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: E0318 13:16:25.741452 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741461 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: E0318 13:16:25.741530 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741537 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741857 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741871 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741884 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741892 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 13:16:25.744242 master-0 kubenswrapper[7146]: I0318 13:16:25.741904 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 13:16:25.865662 master-0 
kubenswrapper[7146]: I0318 13:16:25.865589 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.865662 master-0 kubenswrapper[7146]: I0318 13:16:25.865671 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.865971 master-0 kubenswrapper[7146]: I0318 13:16:25.865748 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.865971 master-0 kubenswrapper[7146]: I0318 13:16:25.865795 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.865971 master-0 kubenswrapper[7146]: I0318 13:16:25.865818 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.865971 master-0 kubenswrapper[7146]: I0318 13:16:25.865841 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.967201 master-0 kubenswrapper[7146]: I0318 13:16:25.967124 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.967201 master-0 kubenswrapper[7146]: I0318 13:16:25.967204 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.967424 master-0 kubenswrapper[7146]: I0318 13:16:25.967250 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.967424 master-0 kubenswrapper[7146]: I0318 13:16:25.967294 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.967424 master-0 kubenswrapper[7146]: I0318 13:16:25.967327 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" 
(UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.967424 master-0 kubenswrapper[7146]: I0318 13:16:25.967353 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.968089 master-0 kubenswrapper[7146]: I0318 13:16:25.967446 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.968089 master-0 kubenswrapper[7146]: I0318 13:16:25.967494 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.968089 master-0 kubenswrapper[7146]: I0318 13:16:25.967523 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.968089 master-0 kubenswrapper[7146]: I0318 13:16:25.967550 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.968089 master-0 kubenswrapper[7146]: I0318 13:16:25.967579 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:25.968089 master-0 kubenswrapper[7146]: I0318 13:16:25.967609 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:26.309732 master-0 kubenswrapper[7146]: I0318 13:16:26.309598 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:16:26.310703 master-0 kubenswrapper[7146]: I0318 13:16:26.310655 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:16:26.312420 master-0 kubenswrapper[7146]: I0318 13:16:26.312370 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f" exitCode=2 Mar 18 13:16:26.312420 master-0 kubenswrapper[7146]: I0318 13:16:26.312398 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c" exitCode=0 Mar 18 13:16:26.312420 master-0 kubenswrapper[7146]: I0318 13:16:26.312406 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc" exitCode=2 Mar 18 13:16:26.327628 master-0 kubenswrapper[7146]: I0318 13:16:26.327558 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:26.327628 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:26.327628 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:26.327628 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:26.327628 master-0 kubenswrapper[7146]: I0318 13:16:26.327629 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:27.318844 master-0 kubenswrapper[7146]: I0318 13:16:27.318723 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" gracePeriod=30 Mar 18 13:16:27.328165 master-0 kubenswrapper[7146]: I0318 13:16:27.328090 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:27.328165 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:27.328165 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:27.328165 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:27.329117 master-0 kubenswrapper[7146]: I0318 13:16:27.328166 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:28.327675 
master-0 kubenswrapper[7146]: I0318 13:16:28.327614 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:28.327675 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:28.327675 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:28.327675 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:28.328045 master-0 kubenswrapper[7146]: I0318 13:16:28.327692 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:29.328341 master-0 kubenswrapper[7146]: I0318 13:16:29.328297 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:29.328341 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:29.328341 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:29.328341 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:29.328895 master-0 kubenswrapper[7146]: I0318 13:16:29.328361 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:30.327561 master-0 kubenswrapper[7146]: I0318 13:16:30.327405 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:30.327561 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:30.327561 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:30.327561 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:30.327561 master-0 kubenswrapper[7146]: I0318 13:16:30.327550 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:31.327662 master-0 kubenswrapper[7146]: I0318 13:16:31.327589 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:31.327662 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:31.327662 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:31.327662 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:31.328263 master-0 kubenswrapper[7146]: I0318 13:16:31.327675 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:32.327389 master-0 kubenswrapper[7146]: I0318 13:16:32.327349 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:32.327389 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:32.327389 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:32.327389 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:32.327724 master-0 kubenswrapper[7146]: I0318 13:16:32.327697 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:33.329109 master-0 kubenswrapper[7146]: I0318 13:16:33.329016 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:33.329109 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:33.329109 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:33.329109 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:33.329109 master-0 kubenswrapper[7146]: I0318 13:16:33.329124 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:33.810686 master-0 kubenswrapper[7146]: E0318 13:16:33.810594 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:33.812358 master-0 kubenswrapper[7146]: E0318 13:16:33.812253 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:33.813636 master-0 kubenswrapper[7146]: E0318 13:16:33.813576 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:33.813732 master-0 kubenswrapper[7146]: E0318 13:16:33.813648 7146 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" Mar 18 13:16:34.329063 master-0 kubenswrapper[7146]: I0318 13:16:34.328921 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:34.329063 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:16:34.329063 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:16:34.329063 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:16:34.329745 master-0 kubenswrapper[7146]: I0318 13:16:34.329072 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:34.329745 master-0 kubenswrapper[7146]: I0318 13:16:34.329146 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:16:34.330732 master-0 kubenswrapper[7146]: I0318 13:16:34.330666 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"31665db688945aa094b6891895ce672425d61018bce7b516675fdc844fb9eb7e"} pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" containerMessage="Container router failed startup probe, will be restarted" Mar 18 13:16:34.330848 master-0 kubenswrapper[7146]: I0318 13:16:34.330751 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" containerID="cri-o://31665db688945aa094b6891895ce672425d61018bce7b516675fdc844fb9eb7e" gracePeriod=3600 Mar 18 13:16:39.398663 master-0 kubenswrapper[7146]: I0318 13:16:39.398596 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:16:39.398663 master-0 kubenswrapper[7146]: I0318 13:16:39.398649 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1" exitCode=1 Mar 18 13:16:39.398663 master-0 kubenswrapper[7146]: I0318 13:16:39.398680 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerDied","Data":"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"} Mar 18 13:16:39.399523 master-0 kubenswrapper[7146]: I0318 13:16:39.399160 7146 scope.go:117] "RemoveContainer" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1" Mar 18 13:16:40.406503 master-0 kubenswrapper[7146]: I0318 13:16:40.406442 7146 
generic.go:334] "Generic (PLEG): container finished" podID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerID="eb1bc4c2de4eef02c4efa419b662829eddc1e0031cc060ee0744bc0347f66eeb" exitCode=0 Mar 18 13:16:40.407275 master-0 kubenswrapper[7146]: I0318 13:16:40.406546 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5879ced8-4ac1-40e3-bf93-38b8a7497823","Type":"ContainerDied","Data":"eb1bc4c2de4eef02c4efa419b662829eddc1e0031cc060ee0744bc0347f66eeb"} Mar 18 13:16:40.409766 master-0 kubenswrapper[7146]: I0318 13:16:40.409694 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:16:40.409905 master-0 kubenswrapper[7146]: I0318 13:16:40.409797 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"} Mar 18 13:16:41.557985 master-0 kubenswrapper[7146]: I0318 13:16:41.556331 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:41.557985 master-0 kubenswrapper[7146]: I0318 13:16:41.556691 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:41.563055 master-0 kubenswrapper[7146]: I0318 13:16:41.563035 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:41.752350 master-0 kubenswrapper[7146]: I0318 13:16:41.752295 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:41.888101 master-0 kubenswrapper[7146]: I0318 13:16:41.888033 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5879ced8-4ac1-40e3-bf93-38b8a7497823-kube-api-access\") pod \"5879ced8-4ac1-40e3-bf93-38b8a7497823\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " Mar 18 13:16:41.888101 master-0 kubenswrapper[7146]: I0318 13:16:41.888098 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-kubelet-dir\") pod \"5879ced8-4ac1-40e3-bf93-38b8a7497823\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " Mar 18 13:16:41.888367 master-0 kubenswrapper[7146]: I0318 13:16:41.888131 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-var-lock\") pod \"5879ced8-4ac1-40e3-bf93-38b8a7497823\" (UID: \"5879ced8-4ac1-40e3-bf93-38b8a7497823\") " Mar 18 13:16:41.888367 master-0 kubenswrapper[7146]: I0318 13:16:41.888279 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5879ced8-4ac1-40e3-bf93-38b8a7497823" (UID: "5879ced8-4ac1-40e3-bf93-38b8a7497823"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:41.888426 master-0 kubenswrapper[7146]: I0318 13:16:41.888365 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-var-lock" (OuterVolumeSpecName: "var-lock") pod "5879ced8-4ac1-40e3-bf93-38b8a7497823" (UID: "5879ced8-4ac1-40e3-bf93-38b8a7497823"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:41.888701 master-0 kubenswrapper[7146]: I0318 13:16:41.888651 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:41.888759 master-0 kubenswrapper[7146]: I0318 13:16:41.888705 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5879ced8-4ac1-40e3-bf93-38b8a7497823-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:41.891290 master-0 kubenswrapper[7146]: I0318 13:16:41.891239 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5879ced8-4ac1-40e3-bf93-38b8a7497823-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5879ced8-4ac1-40e3-bf93-38b8a7497823" (UID: "5879ced8-4ac1-40e3-bf93-38b8a7497823"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:16:41.990376 master-0 kubenswrapper[7146]: I0318 13:16:41.990228 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5879ced8-4ac1-40e3-bf93-38b8a7497823-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:42.441854 master-0 kubenswrapper[7146]: I0318 13:16:42.441788 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5879ced8-4ac1-40e3-bf93-38b8a7497823","Type":"ContainerDied","Data":"792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc"} Mar 18 13:16:42.441854 master-0 kubenswrapper[7146]: I0318 13:16:42.441841 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc" Mar 18 13:16:42.441854 master-0 kubenswrapper[7146]: I0318 13:16:42.441805 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:42.444805 master-0 kubenswrapper[7146]: I0318 13:16:42.444740 7146 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="fa1d385ac095a8d1dc31f1e6dbbfd78274773bc8abd30fc3ee99e963ef88d538" exitCode=1 Mar 18 13:16:42.445063 master-0 kubenswrapper[7146]: I0318 13:16:42.444920 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"fa1d385ac095a8d1dc31f1e6dbbfd78274773bc8abd30fc3ee99e963ef88d538"} Mar 18 13:16:42.445137 master-0 kubenswrapper[7146]: I0318 13:16:42.445093 7146 scope.go:117] "RemoveContainer" containerID="40d2f52b6191fb64bc515d1f7e32cd3a0019730cc68c0ff9674d239a2fee21db" Mar 18 13:16:42.445568 master-0 kubenswrapper[7146]: I0318 13:16:42.445531 7146 scope.go:117] "RemoveContainer" containerID="fa1d385ac095a8d1dc31f1e6dbbfd78274773bc8abd30fc3ee99e963ef88d538" Mar 18 13:16:42.445728 master-0 kubenswrapper[7146]: E0318 13:16:42.445702 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" Mar 18 13:16:43.710365 master-0 kubenswrapper[7146]: E0318 13:16:43.710283 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Mar 18 13:16:43.810898 master-0 kubenswrapper[7146]: E0318 13:16:43.810817 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:43.812433 master-0 kubenswrapper[7146]: E0318 13:16:43.812373 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:43.813719 master-0 kubenswrapper[7146]: E0318 13:16:43.813658 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:43.813829 master-0 kubenswrapper[7146]: E0318 13:16:43.813711 7146 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" Mar 18 13:16:51.558882 master-0 kubenswrapper[7146]: I0318 13:16:51.558842 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:53.711637 master-0 kubenswrapper[7146]: E0318 13:16:53.711404 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:16:53.811080 master-0 
kubenswrapper[7146]: E0318 13:16:53.811001 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:53.812517 master-0 kubenswrapper[7146]: E0318 13:16:53.812481 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:53.813584 master-0 kubenswrapper[7146]: E0318 13:16:53.813553 7146 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 13:16:53.813663 master-0 kubenswrapper[7146]: E0318 13:16:53.813595 7146 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" Mar 18 13:16:54.357956 master-0 kubenswrapper[7146]: I0318 13:16:54.357501 7146 scope.go:117] "RemoveContainer" containerID="fa1d385ac095a8d1dc31f1e6dbbfd78274773bc8abd30fc3ee99e963ef88d538" Mar 18 13:16:55.538194 master-0 kubenswrapper[7146]: I0318 13:16:55.538121 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"311fa0a837fab2a478663d760de17d2a8ddc702068f88e4f3d424a59411456ff"} Mar 18 13:16:56.291234 master-0 kubenswrapper[7146]: I0318 13:16:56.291190 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:16:56.292355 master-0 kubenswrapper[7146]: I0318 13:16:56.292310 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:16:56.293182 master-0 kubenswrapper[7146]: I0318 13:16:56.293141 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 13:16:56.293546 master-0 kubenswrapper[7146]: I0318 13:16:56.293518 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 13:16:56.294611 master-0 kubenswrapper[7146]: I0318 13:16:56.294575 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:16:56.389852 master-0 kubenswrapper[7146]: I0318 13:16:56.389797 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:16:56.390207 master-0 kubenswrapper[7146]: I0318 13:16:56.390187 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:16:56.390317 master-0 kubenswrapper[7146]: I0318 13:16:56.390304 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:16:56.390411 master-0 kubenswrapper[7146]: I0318 13:16:56.390046 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:56.390452 master-0 kubenswrapper[7146]: I0318 13:16:56.390224 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:56.390535 master-0 kubenswrapper[7146]: I0318 13:16:56.390523 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:16:56.390635 master-0 kubenswrapper[7146]: I0318 13:16:56.390618 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:16:56.390742 master-0 kubenswrapper[7146]: I0318 13:16:56.390729 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:16:56.390899 master-0 kubenswrapper[7146]: I0318 13:16:56.390553 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:56.390976 master-0 kubenswrapper[7146]: I0318 13:16:56.390596 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:56.390976 master-0 kubenswrapper[7146]: I0318 13:16:56.390693 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:56.390976 master-0 kubenswrapper[7146]: I0318 13:16:56.390784 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:56.391266 master-0 kubenswrapper[7146]: I0318 13:16:56.391245 7146 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:56.391355 master-0 kubenswrapper[7146]: I0318 13:16:56.391344 7146 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:56.391419 master-0 kubenswrapper[7146]: I0318 13:16:56.391410 7146 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:56.391472 master-0 kubenswrapper[7146]: I0318 13:16:56.391463 7146 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath 
\"\"" Mar 18 13:16:56.391544 master-0 kubenswrapper[7146]: I0318 13:16:56.391532 7146 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:56.391611 master-0 kubenswrapper[7146]: I0318 13:16:56.391600 7146 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:56.545890 master-0 kubenswrapper[7146]: I0318 13:16:56.545789 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:16:56.547375 master-0 kubenswrapper[7146]: I0318 13:16:56.547348 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:16:56.548014 master-0 kubenswrapper[7146]: I0318 13:16:56.548000 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 13:16:56.548712 master-0 kubenswrapper[7146]: I0318 13:16:56.548666 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 13:16:56.549668 master-0 kubenswrapper[7146]: I0318 13:16:56.549619 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6" exitCode=137 Mar 18 13:16:56.549668 master-0 kubenswrapper[7146]: I0318 13:16:56.549654 7146 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5" exitCode=137 Mar 18 13:16:56.549779 master-0 
kubenswrapper[7146]: I0318 13:16:56.549754 7146 scope.go:117] "RemoveContainer" containerID="8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f" Mar 18 13:16:56.549779 master-0 kubenswrapper[7146]: I0318 13:16:56.549757 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:16:56.566603 master-0 kubenswrapper[7146]: I0318 13:16:56.566581 7146 scope.go:117] "RemoveContainer" containerID="9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c" Mar 18 13:16:56.581482 master-0 kubenswrapper[7146]: I0318 13:16:56.581401 7146 scope.go:117] "RemoveContainer" containerID="a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc" Mar 18 13:16:56.596281 master-0 kubenswrapper[7146]: I0318 13:16:56.596252 7146 scope.go:117] "RemoveContainer" containerID="a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6" Mar 18 13:16:56.613829 master-0 kubenswrapper[7146]: I0318 13:16:56.613703 7146 scope.go:117] "RemoveContainer" containerID="62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5" Mar 18 13:16:56.625448 master-0 kubenswrapper[7146]: I0318 13:16:56.625420 7146 scope.go:117] "RemoveContainer" containerID="79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c" Mar 18 13:16:56.636885 master-0 kubenswrapper[7146]: I0318 13:16:56.636837 7146 scope.go:117] "RemoveContainer" containerID="82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d" Mar 18 13:16:56.648402 master-0 kubenswrapper[7146]: I0318 13:16:56.648360 7146 scope.go:117] "RemoveContainer" containerID="323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a" Mar 18 13:16:56.660689 master-0 kubenswrapper[7146]: I0318 13:16:56.660654 7146 scope.go:117] "RemoveContainer" containerID="8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f" Mar 18 13:16:56.661120 master-0 kubenswrapper[7146]: E0318 13:16:56.661066 7146 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f\": container with ID starting with 8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f not found: ID does not exist" containerID="8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f" Mar 18 13:16:56.661195 master-0 kubenswrapper[7146]: I0318 13:16:56.661118 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f"} err="failed to get container status \"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f\": rpc error: code = NotFound desc = could not find container \"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f\": container with ID starting with 8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f not found: ID does not exist" Mar 18 13:16:56.661195 master-0 kubenswrapper[7146]: I0318 13:16:56.661146 7146 scope.go:117] "RemoveContainer" containerID="9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c" Mar 18 13:16:56.661492 master-0 kubenswrapper[7146]: E0318 13:16:56.661459 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c\": container with ID starting with 9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c not found: ID does not exist" containerID="9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c" Mar 18 13:16:56.661557 master-0 kubenswrapper[7146]: I0318 13:16:56.661489 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c"} err="failed to get container status \"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c\": rpc error: code = 
NotFound desc = could not find container \"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c\": container with ID starting with 9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c not found: ID does not exist" Mar 18 13:16:56.661557 master-0 kubenswrapper[7146]: I0318 13:16:56.661509 7146 scope.go:117] "RemoveContainer" containerID="a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc" Mar 18 13:16:56.661809 master-0 kubenswrapper[7146]: E0318 13:16:56.661790 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc\": container with ID starting with a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc not found: ID does not exist" containerID="a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc" Mar 18 13:16:56.661809 master-0 kubenswrapper[7146]: I0318 13:16:56.661812 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc"} err="failed to get container status \"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc\": rpc error: code = NotFound desc = could not find container \"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc\": container with ID starting with a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc not found: ID does not exist" Mar 18 13:16:56.661809 master-0 kubenswrapper[7146]: I0318 13:16:56.661827 7146 scope.go:117] "RemoveContainer" containerID="a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6" Mar 18 13:16:56.662202 master-0 kubenswrapper[7146]: E0318 13:16:56.662165 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6\": container with ID starting 
with a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6 not found: ID does not exist" containerID="a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6" Mar 18 13:16:56.662270 master-0 kubenswrapper[7146]: I0318 13:16:56.662194 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6"} err="failed to get container status \"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6\": rpc error: code = NotFound desc = could not find container \"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6\": container with ID starting with a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6 not found: ID does not exist" Mar 18 13:16:56.662270 master-0 kubenswrapper[7146]: I0318 13:16:56.662215 7146 scope.go:117] "RemoveContainer" containerID="62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5" Mar 18 13:16:56.662452 master-0 kubenswrapper[7146]: E0318 13:16:56.662430 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5\": container with ID starting with 62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5 not found: ID does not exist" containerID="62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5" Mar 18 13:16:56.662590 master-0 kubenswrapper[7146]: I0318 13:16:56.662570 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5"} err="failed to get container status \"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5\": rpc error: code = NotFound desc = could not find container \"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5\": container with ID starting with 
62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5 not found: ID does not exist" Mar 18 13:16:56.662669 master-0 kubenswrapper[7146]: I0318 13:16:56.662657 7146 scope.go:117] "RemoveContainer" containerID="79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c" Mar 18 13:16:56.663046 master-0 kubenswrapper[7146]: E0318 13:16:56.663021 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c\": container with ID starting with 79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c not found: ID does not exist" containerID="79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c" Mar 18 13:16:56.663150 master-0 kubenswrapper[7146]: I0318 13:16:56.663127 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c"} err="failed to get container status \"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c\": rpc error: code = NotFound desc = could not find container \"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c\": container with ID starting with 79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c not found: ID does not exist" Mar 18 13:16:56.663213 master-0 kubenswrapper[7146]: I0318 13:16:56.663202 7146 scope.go:117] "RemoveContainer" containerID="82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d" Mar 18 13:16:56.663518 master-0 kubenswrapper[7146]: E0318 13:16:56.663494 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d\": container with ID starting with 82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d not found: ID does not exist" 
containerID="82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d" Mar 18 13:16:56.663518 master-0 kubenswrapper[7146]: I0318 13:16:56.663514 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d"} err="failed to get container status \"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d\": rpc error: code = NotFound desc = could not find container \"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d\": container with ID starting with 82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d not found: ID does not exist" Mar 18 13:16:56.663630 master-0 kubenswrapper[7146]: I0318 13:16:56.663528 7146 scope.go:117] "RemoveContainer" containerID="323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a" Mar 18 13:16:56.663788 master-0 kubenswrapper[7146]: E0318 13:16:56.663760 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a\": container with ID starting with 323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a not found: ID does not exist" containerID="323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a" Mar 18 13:16:56.663854 master-0 kubenswrapper[7146]: I0318 13:16:56.663793 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a"} err="failed to get container status \"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a\": rpc error: code = NotFound desc = could not find container \"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a\": container with ID starting with 323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a not found: ID does not exist" Mar 18 13:16:56.663854 master-0 
kubenswrapper[7146]: I0318 13:16:56.663811 7146 scope.go:117] "RemoveContainer" containerID="8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f" Mar 18 13:16:56.664097 master-0 kubenswrapper[7146]: I0318 13:16:56.664079 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f"} err="failed to get container status \"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f\": rpc error: code = NotFound desc = could not find container \"8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f\": container with ID starting with 8a562044d18e998152ec295ff2940b94453d06068d629f03893f3bd11dac494f not found: ID does not exist" Mar 18 13:16:56.664210 master-0 kubenswrapper[7146]: I0318 13:16:56.664191 7146 scope.go:117] "RemoveContainer" containerID="9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c" Mar 18 13:16:56.664522 master-0 kubenswrapper[7146]: I0318 13:16:56.664493 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c"} err="failed to get container status \"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c\": rpc error: code = NotFound desc = could not find container \"9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c\": container with ID starting with 9b07e126c556fda409310ce3f4ade174ec8553d71974efdb020b209f4f4c150c not found: ID does not exist" Mar 18 13:16:56.664603 master-0 kubenswrapper[7146]: I0318 13:16:56.664522 7146 scope.go:117] "RemoveContainer" containerID="a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc" Mar 18 13:16:56.664884 master-0 kubenswrapper[7146]: I0318 13:16:56.664864 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc"} 
err="failed to get container status \"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc\": rpc error: code = NotFound desc = could not find container \"a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc\": container with ID starting with a549d429b07d4ad9ab9bd2e9ea8b69fcd59fa7572d07446c525cdda08c760cbc not found: ID does not exist" Mar 18 13:16:56.664973 master-0 kubenswrapper[7146]: I0318 13:16:56.664962 7146 scope.go:117] "RemoveContainer" containerID="a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6" Mar 18 13:16:56.665297 master-0 kubenswrapper[7146]: I0318 13:16:56.665281 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6"} err="failed to get container status \"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6\": rpc error: code = NotFound desc = could not find container \"a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6\": container with ID starting with a34f9c4245670ed076e7d03ac590e3edd882ed0cb3f0c4cb78b2d6e573c86dd6 not found: ID does not exist" Mar 18 13:16:56.665373 master-0 kubenswrapper[7146]: I0318 13:16:56.665362 7146 scope.go:117] "RemoveContainer" containerID="62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5" Mar 18 13:16:56.665679 master-0 kubenswrapper[7146]: I0318 13:16:56.665641 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5"} err="failed to get container status \"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5\": rpc error: code = NotFound desc = could not find container \"62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5\": container with ID starting with 62fe9beeb36b3ac56bbe68283ac1166884e88bd12efe7396dc640369daf3dcb5 not found: ID does not exist" Mar 18 13:16:56.665679 master-0 
kubenswrapper[7146]: I0318 13:16:56.665670 7146 scope.go:117] "RemoveContainer" containerID="79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c" Mar 18 13:16:56.665994 master-0 kubenswrapper[7146]: I0318 13:16:56.665960 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c"} err="failed to get container status \"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c\": rpc error: code = NotFound desc = could not find container \"79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c\": container with ID starting with 79d04f4871a241f9be5403756ec9bc9fffb125a7eb22092e57838a02fa67798c not found: ID does not exist" Mar 18 13:16:56.665994 master-0 kubenswrapper[7146]: I0318 13:16:56.665987 7146 scope.go:117] "RemoveContainer" containerID="82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d" Mar 18 13:16:56.666263 master-0 kubenswrapper[7146]: I0318 13:16:56.666225 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d"} err="failed to get container status \"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d\": rpc error: code = NotFound desc = could not find container \"82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d\": container with ID starting with 82e68aa3646d78dccb543b630e55b612be0fbbc4dcea9fa843c34bed76f82c4d not found: ID does not exist" Mar 18 13:16:56.666263 master-0 kubenswrapper[7146]: I0318 13:16:56.666253 7146 scope.go:117] "RemoveContainer" containerID="323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a" Mar 18 13:16:56.666489 master-0 kubenswrapper[7146]: I0318 13:16:56.666464 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a"} 
err="failed to get container status \"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a\": rpc error: code = NotFound desc = could not find container \"323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a\": container with ID starting with 323c894407d8140f4289f06455d5830882c42acb7ce19cf8ce045b3f0773e40a not found: ID does not exist" Mar 18 13:16:57.366220 master-0 kubenswrapper[7146]: I0318 13:16:57.366175 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 18 13:16:57.429308 master-0 kubenswrapper[7146]: I0318 13:16:57.429275 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mptsw_956513bf-3b98-4b0d-aca7-ccc3fdf8ae12/kube-multus-additional-cni-plugins/0.log" Mar 18 13:16:57.429495 master-0 kubenswrapper[7146]: I0318 13:16:57.429335 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:57.558682 master-0 kubenswrapper[7146]: I0318 13:16:57.558638 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mptsw_956513bf-3b98-4b0d-aca7-ccc3fdf8ae12/kube-multus-additional-cni-plugins/0.log" Mar 18 13:16:57.559384 master-0 kubenswrapper[7146]: I0318 13:16:57.559194 7146 generic.go:334] "Generic (PLEG): container finished" podID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" exitCode=137 Mar 18 13:16:57.559384 master-0 kubenswrapper[7146]: I0318 13:16:57.559266 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" event={"ID":"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12","Type":"ContainerDied","Data":"30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1"} Mar 18 13:16:57.559384 master-0 kubenswrapper[7146]: I0318 13:16:57.559296 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" event={"ID":"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12","Type":"ContainerDied","Data":"fd6b0b032151d2881a4a54074951594cc7f0bc7d221bf627ffe89c45750bb858"} Mar 18 13:16:57.559384 master-0 kubenswrapper[7146]: I0318 13:16:57.559314 7146 scope.go:117] "RemoveContainer" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" Mar 18 13:16:57.559384 master-0 kubenswrapper[7146]: I0318 13:16:57.559325 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" Mar 18 13:16:57.579317 master-0 kubenswrapper[7146]: I0318 13:16:57.579272 7146 scope.go:117] "RemoveContainer" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" Mar 18 13:16:57.579945 master-0 kubenswrapper[7146]: E0318 13:16:57.579878 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1\": container with ID starting with 30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1 not found: ID does not exist" containerID="30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1" Mar 18 13:16:57.580115 master-0 kubenswrapper[7146]: I0318 13:16:57.580053 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1"} err="failed to get container status \"30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1\": rpc error: code = NotFound desc = could not find container \"30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1\": container with ID starting with 30cf2da04d76ff10df0dd04500dd413d2f4a74f98d64eb9dad05492639e319e1 not found: ID does not exist" Mar 18 13:16:57.609785 master-0 kubenswrapper[7146]: I0318 13:16:57.609716 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwhq2\" (UniqueName: \"kubernetes.io/projected/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-kube-api-access-wwhq2\") pod \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " Mar 18 13:16:57.610019 master-0 kubenswrapper[7146]: I0318 13:16:57.609983 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-cni-sysctl-allowlist\") pod \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " Mar 18 13:16:57.610120 master-0 kubenswrapper[7146]: I0318 13:16:57.610087 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-tuning-conf-dir\") pod \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " Mar 18 13:16:57.610273 master-0 kubenswrapper[7146]: I0318 13:16:57.610238 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-ready\") pod \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\" (UID: \"956513bf-3b98-4b0d-aca7-ccc3fdf8ae12\") " Mar 18 13:16:57.610348 master-0 kubenswrapper[7146]: I0318 13:16:57.610208 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" (UID: "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:57.610445 master-0 kubenswrapper[7146]: I0318 13:16:57.610394 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" (UID: "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:16:57.610705 master-0 kubenswrapper[7146]: I0318 13:16:57.610650 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-ready" (OuterVolumeSpecName: "ready") pod "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" (UID: "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:16:57.610796 master-0 kubenswrapper[7146]: I0318 13:16:57.610753 7146 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:57.610840 master-0 kubenswrapper[7146]: I0318 13:16:57.610808 7146 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:57.614254 master-0 kubenswrapper[7146]: I0318 13:16:57.614222 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-kube-api-access-wwhq2" (OuterVolumeSpecName: "kube-api-access-wwhq2") pod "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" (UID: "956513bf-3b98-4b0d-aca7-ccc3fdf8ae12"). InnerVolumeSpecName "kube-api-access-wwhq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:16:57.712152 master-0 kubenswrapper[7146]: I0318 13:16:57.712053 7146 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-ready\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:57.712152 master-0 kubenswrapper[7146]: I0318 13:16:57.712121 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwhq2\" (UniqueName: \"kubernetes.io/projected/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12-kube-api-access-wwhq2\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:59.748642 master-0 kubenswrapper[7146]: E0318 13:16:59.748505 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189df1e30c847c27 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:16:25.723722791 +0000 UTC m=+494.531940162,LastTimestamp:2026-03-18 13:16:25.723722791 +0000 UTC m=+494.531940162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:17:03.712412 master-0 kubenswrapper[7146]: E0318 13:17:03.712315 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:05.357964 master-0 kubenswrapper[7146]: I0318 13:17:05.357796 7146 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:17:05.389487 master-0 kubenswrapper[7146]: I0318 13:17:05.389456 7146 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49" Mar 18 13:17:05.389676 master-0 kubenswrapper[7146]: I0318 13:17:05.389665 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49" Mar 18 13:17:08.637607 master-0 kubenswrapper[7146]: I0318 13:17:08.637544 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_2fca2c29-3791-43b8-97f1-a9a6d58ec92d/installer/0.log" Mar 18 13:17:08.637607 master-0 kubenswrapper[7146]: I0318 13:17:08.637596 7146 generic.go:334] "Generic (PLEG): container finished" podID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerID="e194112a7651927c16369879335d3ba30bda7302ae714dc813e610c582b27c4a" exitCode=1 Mar 18 13:17:08.637607 master-0 kubenswrapper[7146]: I0318 13:17:08.637627 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"2fca2c29-3791-43b8-97f1-a9a6d58ec92d","Type":"ContainerDied","Data":"e194112a7651927c16369879335d3ba30bda7302ae714dc813e610c582b27c4a"} Mar 18 13:17:09.913657 master-0 kubenswrapper[7146]: I0318 13:17:09.913571 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_2fca2c29-3791-43b8-97f1-a9a6d58ec92d/installer/0.log" Mar 18 13:17:09.913657 master-0 kubenswrapper[7146]: I0318 13:17:09.913638 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:17:10.102115 master-0 kubenswrapper[7146]: I0318 13:17:10.102052 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kubelet-dir\") pod \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " Mar 18 13:17:10.102344 master-0 kubenswrapper[7146]: I0318 13:17:10.102195 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kube-api-access\") pod \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " Mar 18 13:17:10.102344 master-0 kubenswrapper[7146]: I0318 13:17:10.102203 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2fca2c29-3791-43b8-97f1-a9a6d58ec92d" (UID: "2fca2c29-3791-43b8-97f1-a9a6d58ec92d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:10.102344 master-0 kubenswrapper[7146]: I0318 13:17:10.102249 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-var-lock\") pod \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\" (UID: \"2fca2c29-3791-43b8-97f1-a9a6d58ec92d\") " Mar 18 13:17:10.102344 master-0 kubenswrapper[7146]: I0318 13:17:10.102303 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-var-lock" (OuterVolumeSpecName: "var-lock") pod "2fca2c29-3791-43b8-97f1-a9a6d58ec92d" (UID: "2fca2c29-3791-43b8-97f1-a9a6d58ec92d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:10.102977 master-0 kubenswrapper[7146]: I0318 13:17:10.102904 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:10.103047 master-0 kubenswrapper[7146]: I0318 13:17:10.102991 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:10.105455 master-0 kubenswrapper[7146]: I0318 13:17:10.105391 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2fca2c29-3791-43b8-97f1-a9a6d58ec92d" (UID: "2fca2c29-3791-43b8-97f1-a9a6d58ec92d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:17:10.204728 master-0 kubenswrapper[7146]: I0318 13:17:10.204672 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2fca2c29-3791-43b8-97f1-a9a6d58ec92d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:10.651902 master-0 kubenswrapper[7146]: I0318 13:17:10.651852 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_2fca2c29-3791-43b8-97f1-a9a6d58ec92d/installer/0.log" Mar 18 13:17:10.651902 master-0 kubenswrapper[7146]: I0318 13:17:10.651920 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"2fca2c29-3791-43b8-97f1-a9a6d58ec92d","Type":"ContainerDied","Data":"abd6d9cd064ffc49598289235ab6b846f24e69f6bc0b898e367dc9ec6a8b35e1"} Mar 18 13:17:10.652266 master-0 kubenswrapper[7146]: I0318 13:17:10.651981 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd6d9cd064ffc49598289235ab6b846f24e69f6bc0b898e367dc9ec6a8b35e1" Mar 18 13:17:10.652266 master-0 kubenswrapper[7146]: I0318 13:17:10.652044 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 13:17:13.712992 master-0 kubenswrapper[7146]: E0318 13:17:13.712818 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:18.697957 master-0 kubenswrapper[7146]: I0318 13:17:18.697864 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/1.log" Mar 18 13:17:18.699430 master-0 kubenswrapper[7146]: I0318 13:17:18.698710 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/0.log" Mar 18 13:17:18.699430 master-0 kubenswrapper[7146]: I0318 13:17:18.699334 7146 generic.go:334] "Generic (PLEG): container finished" podID="eb8907fd-35dd-452a-8032-f2f95a6e553a" containerID="0e76cffa571436858041a59dc3cb08e8f19f5b925a773925f3208413a9e44b8f" exitCode=1 Mar 18 13:17:18.699430 master-0 kubenswrapper[7146]: I0318 13:17:18.699382 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerDied","Data":"0e76cffa571436858041a59dc3cb08e8f19f5b925a773925f3208413a9e44b8f"} Mar 18 13:17:18.699685 master-0 kubenswrapper[7146]: I0318 13:17:18.699453 7146 scope.go:117] "RemoveContainer" containerID="42763f2e1945cdd442dd148f3b0766793cb775dcfcb2d6ede73f97fce1315683" Mar 18 13:17:18.700321 master-0 kubenswrapper[7146]: I0318 13:17:18.700286 7146 scope.go:117] "RemoveContainer" containerID="0e76cffa571436858041a59dc3cb08e8f19f5b925a773925f3208413a9e44b8f" Mar 18 13:17:18.700699 master-0 kubenswrapper[7146]: E0318 13:17:18.700651 7146 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-xcbtb_openshift-network-node-identity(eb8907fd-35dd-452a-8032-f2f95a6e553a)\"" pod="openshift-network-node-identity/network-node-identity-xcbtb" podUID="eb8907fd-35dd-452a-8032-f2f95a6e553a" Mar 18 13:17:19.720431 master-0 kubenswrapper[7146]: I0318 13:17:19.720363 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/1.log" Mar 18 13:17:20.727736 master-0 kubenswrapper[7146]: I0318 13:17:20.727667 7146 generic.go:334] "Generic (PLEG): container finished" podID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerID="31665db688945aa094b6891895ce672425d61018bce7b516675fdc844fb9eb7e" exitCode=0 Mar 18 13:17:20.728270 master-0 kubenswrapper[7146]: I0318 13:17:20.727760 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerDied","Data":"31665db688945aa094b6891895ce672425d61018bce7b516675fdc844fb9eb7e"} Mar 18 13:17:20.728270 master-0 kubenswrapper[7146]: I0318 13:17:20.727854 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerStarted","Data":"f8a3caa2163025eca93eda965504b4cc6018d77ba7a2820b766d5ff6236b73e8"} Mar 18 13:17:20.728270 master-0 kubenswrapper[7146]: I0318 13:17:20.727881 7146 scope.go:117] "RemoveContainer" containerID="e11b0d0a2f2fcef8559280a2714debec3210ea7873ccaa447460e5bbe4ca1669" Mar 18 13:17:21.325929 master-0 kubenswrapper[7146]: I0318 13:17:21.325854 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 
13:17:21.328446 master-0 kubenswrapper[7146]: I0318 13:17:21.328397 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:21.328446 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:21.328446 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:21.328446 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:21.328589 master-0 kubenswrapper[7146]: I0318 13:17:21.328476 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:22.328443 master-0 kubenswrapper[7146]: I0318 13:17:22.328354 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:22.328443 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:22.328443 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:22.328443 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:22.329204 master-0 kubenswrapper[7146]: I0318 13:17:22.328450 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:23.329368 master-0 kubenswrapper[7146]: I0318 13:17:23.329314 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:23.329368 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:23.329368 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:23.329368 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:23.330149 master-0 kubenswrapper[7146]: I0318 13:17:23.330113 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:23.713819 master-0 kubenswrapper[7146]: E0318 13:17:23.713719 7146 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:23.713819 master-0 kubenswrapper[7146]: I0318 13:17:23.713796 7146 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 13:17:24.325485 master-0 kubenswrapper[7146]: I0318 13:17:24.325404 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:17:24.327008 master-0 kubenswrapper[7146]: I0318 13:17:24.326972 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:24.327008 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:24.327008 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:24.327008 master-0 kubenswrapper[7146]: healthz check failed Mar 18 
13:17:24.327198 master-0 kubenswrapper[7146]: I0318 13:17:24.327021 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:25.328286 master-0 kubenswrapper[7146]: I0318 13:17:25.328218 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:25.328286 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:25.328286 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:25.328286 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:25.328912 master-0 kubenswrapper[7146]: I0318 13:17:25.328294 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:26.329062 master-0 kubenswrapper[7146]: I0318 13:17:26.328962 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:26.329062 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:26.329062 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:26.329062 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:26.329062 master-0 kubenswrapper[7146]: I0318 13:17:26.329043 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" 
podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:27.320306 master-0 kubenswrapper[7146]: I0318 13:17:27.320202 7146 status_manager.go:851] "Failed to get status for pod" podUID="24b4ed170d527099878cb5fdd508a2fb" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 18 13:17:27.327855 master-0 kubenswrapper[7146]: I0318 13:17:27.327791 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:27.327855 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:27.327855 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:27.327855 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:27.328171 master-0 kubenswrapper[7146]: I0318 13:17:27.327872 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:28.328204 master-0 kubenswrapper[7146]: I0318 13:17:28.328126 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:28.328204 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:28.328204 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:28.328204 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:28.328204 master-0 kubenswrapper[7146]: I0318 
13:17:28.328191 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:29.328584 master-0 kubenswrapper[7146]: I0318 13:17:29.328545 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:29.328584 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:29.328584 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:29.328584 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:29.329240 master-0 kubenswrapper[7146]: I0318 13:17:29.329208 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:30.327312 master-0 kubenswrapper[7146]: I0318 13:17:30.327263 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:30.327312 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:30.327312 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:30.327312 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:30.327600 master-0 kubenswrapper[7146]: I0318 13:17:30.327321 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:17:31.328818 master-0 kubenswrapper[7146]: I0318 13:17:31.328728 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:31.328818 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:31.328818 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:31.328818 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:31.328818 master-0 kubenswrapper[7146]: I0318 13:17:31.328801 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:32.328456 master-0 kubenswrapper[7146]: I0318 13:17:32.328275 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:32.328456 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:32.328456 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:32.328456 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:32.328791 master-0 kubenswrapper[7146]: I0318 13:17:32.328377 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:33.093013 master-0 kubenswrapper[7146]: E0318 13:17:33.090264 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch 
status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:33.328278 master-0 kubenswrapper[7146]: I0318 13:17:33.328112 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:33.328278 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:17:33.328278 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:17:33.328278 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:17:33.328278 master-0 kubenswrapper[7146]: I0318 13:17:33.328195 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500"
Mar 18 13:17:33.714731 master-0 kubenswrapper[7146]: E0318 13:17:33.714365 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: E0318 13:17:33.751869 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: &Event{ObjectMeta:{router-default-7dcf5569b5-mtnzv.189df1a27673ca2d openshift-ingress 11133 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7dcf5569b5-mtnzv,UID:ab9ef7c0-f9f2-4048-9857-06ab48f36ecf,APIVersion:v1,ResourceVersion:10656,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: body: [-]backend-http failed: reason withheld
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]:
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:48 +0000 UTC,LastTimestamp:2026-03-18 13:16:26.327606614 +0000 UTC m=+495.135823975,Count:232,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Mar 18 13:17:33.752084 master-0 kubenswrapper[7146]: >
Mar 18 13:17:34.328475 master-0 kubenswrapper[7146]: I0318 13:17:34.328387 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:34.328475 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:34.328475 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:34.328475 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:34.328475 master-0 kubenswrapper[7146]: I0318 13:17:34.328476 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:34.358400 master-0 kubenswrapper[7146]: I0318 13:17:34.358323 7146 scope.go:117] "RemoveContainer" containerID="0e76cffa571436858041a59dc3cb08e8f19f5b925a773925f3208413a9e44b8f"
Mar 18 13:17:34.808614 master-0 kubenswrapper[7146]: I0318 13:17:34.808554 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/1.log"
Mar 18 13:17:34.809043 master-0 kubenswrapper[7146]: I0318 13:17:34.808976 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-xcbtb" event={"ID":"eb8907fd-35dd-452a-8032-f2f95a6e553a","Type":"ContainerStarted","Data":"4cdb8f6a7d491b97d1841a4242b8b4af974f6508a7ab21693a14fb2fdcee78d0"}
Mar 18 13:17:35.328387 master-0 kubenswrapper[7146]: I0318 13:17:35.328310 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:35.328387 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:35.328387 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:35.328387 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:35.329681 master-0 kubenswrapper[7146]: I0318 13:17:35.328429 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:36.327510 master-0 kubenswrapper[7146]: I0318 13:17:36.327441 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:36.327510 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:36.327510 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:36.327510 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:36.327510 master-0 kubenswrapper[7146]: I0318 13:17:36.327495 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:36.822527 master-0 kubenswrapper[7146]: I0318 13:17:36.822463 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/3.log"
Mar 18 13:17:36.823428 master-0 kubenswrapper[7146]: I0318 13:17:36.823372 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/2.log"
Mar 18 13:17:36.823856 master-0 kubenswrapper[7146]: I0318 13:17:36.823796 7146 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9" exitCode=1
Mar 18 13:17:36.823856 master-0 kubenswrapper[7146]: I0318 13:17:36.823850 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerDied","Data":"c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9"}
Mar 18 13:17:36.824061 master-0 kubenswrapper[7146]: I0318 13:17:36.823894 7146 scope.go:117] "RemoveContainer" containerID="7c793255a5608311d981bf6038801d212aa1f98f8a9233aeb3861db6e4fc95b7"
Mar 18 13:17:36.824647 master-0 kubenswrapper[7146]: I0318 13:17:36.824607 7146 scope.go:117] "RemoveContainer" containerID="c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9"
Mar 18 13:17:36.825082 master-0 kubenswrapper[7146]: E0318 13:17:36.824932 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38"
Mar 18 13:17:37.328656 master-0 kubenswrapper[7146]: I0318 13:17:37.328430 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:37.328656 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:37.328656 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:37.328656 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:37.328656 master-0 kubenswrapper[7146]: I0318 13:17:37.328505 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:37.832927 master-0 kubenswrapper[7146]: I0318 13:17:37.832874 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/3.log"
Mar 18 13:17:38.328598 master-0 kubenswrapper[7146]: I0318 13:17:38.328308 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:38.328598 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:38.328598 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:38.328598 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:38.329221 master-0 kubenswrapper[7146]: I0318 13:17:38.328697 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:39.327119 master-0 kubenswrapper[7146]: I0318 13:17:39.327057 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:39.327119 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:39.327119 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:39.327119 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:39.328508 master-0 kubenswrapper[7146]: I0318 13:17:39.327125 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:39.392374 master-0 kubenswrapper[7146]: E0318 13:17:39.392296 7146 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 13:17:39.393079 master-0 kubenswrapper[7146]: I0318 13:17:39.392923 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 13:17:39.409033 master-0 kubenswrapper[7146]: W0318 13:17:39.408916 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094204df314fe45bd5af12ca1b4622bb.slice/crio-107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790 WatchSource:0}: Error finding container 107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790: Status 404 returned error can't find the container with id 107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790
Mar 18 13:17:39.845602 master-0 kubenswrapper[7146]: I0318 13:17:39.845481 7146 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="200e8cc7b998c12ebab49945348ad20ad11d9b022c6433d242aed2cda0e0a774" exitCode=0
Mar 18 13:17:39.845602 master-0 kubenswrapper[7146]: I0318 13:17:39.845538 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"200e8cc7b998c12ebab49945348ad20ad11d9b022c6433d242aed2cda0e0a774"}
Mar 18 13:17:39.845602 master-0 kubenswrapper[7146]: I0318 13:17:39.845575 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790"}
Mar 18 13:17:39.845917 master-0 kubenswrapper[7146]: I0318 13:17:39.845893 7146 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49"
Mar 18 13:17:39.845917 master-0 kubenswrapper[7146]: I0318 13:17:39.845913 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49"
Mar 18 13:17:40.328379 master-0 kubenswrapper[7146]: I0318 13:17:40.328319 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:40.328379 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:40.328379 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:40.328379 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:40.329183 master-0 kubenswrapper[7146]: I0318 13:17:40.328389 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:41.328302 master-0 kubenswrapper[7146]: I0318 13:17:41.328231 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:41.328302 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:41.328302 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:41.328302 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:41.329059 master-0 kubenswrapper[7146]: I0318 13:17:41.328314 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:42.328956 master-0 kubenswrapper[7146]: I0318 13:17:42.328880 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:42.328956 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:42.328956 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:42.328956 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:42.329529 master-0 kubenswrapper[7146]: I0318 13:17:42.329001 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:43.091534 master-0 kubenswrapper[7146]: E0318 13:17:43.091233 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:17:43.327600 master-0 kubenswrapper[7146]: I0318 13:17:43.327531 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:43.327600 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:43.327600 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:43.327600 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:43.327921 master-0 kubenswrapper[7146]: I0318 13:17:43.327611 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:43.916495 master-0 kubenswrapper[7146]: E0318 13:17:43.916414 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 18 13:17:44.329515 master-0 kubenswrapper[7146]: I0318 13:17:44.329375 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:44.329515 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:44.329515 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:44.329515 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:44.329515 master-0 kubenswrapper[7146]: I0318 13:17:44.329480 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:45.328191 master-0 kubenswrapper[7146]: I0318 13:17:45.328139 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:45.328191 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:45.328191 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:45.328191 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:45.328767 master-0 kubenswrapper[7146]: I0318 13:17:45.328212 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:46.328297 master-0 kubenswrapper[7146]: I0318 13:17:46.328235 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:46.328297 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:46.328297 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:46.328297 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:46.328892 master-0 kubenswrapper[7146]: I0318 13:17:46.328320 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:47.328586 master-0 kubenswrapper[7146]: I0318 13:17:47.328546 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:47.328586 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:47.328586 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:47.328586 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:47.329292 master-0 kubenswrapper[7146]: I0318 13:17:47.329259 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:48.327549 master-0 kubenswrapper[7146]: I0318 13:17:48.327456 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:48.327549 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:48.327549 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:48.327549 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:48.328116 master-0 kubenswrapper[7146]: I0318 13:17:48.327558 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:49.328620 master-0 kubenswrapper[7146]: I0318 13:17:49.328541 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:49.328620 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:49.328620 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:49.328620 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:49.329793 master-0 kubenswrapper[7146]: I0318 13:17:49.328626 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:49.358231 master-0 kubenswrapper[7146]: I0318 13:17:49.358191 7146 scope.go:117] "RemoveContainer" containerID="c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9"
Mar 18 13:17:49.358680 master-0 kubenswrapper[7146]: E0318 13:17:49.358660 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38"
Mar 18 13:17:50.328596 master-0 kubenswrapper[7146]: I0318 13:17:50.328543 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:50.328596 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:50.328596 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:50.328596 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:50.329276 master-0 kubenswrapper[7146]: I0318 13:17:50.328629 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:51.327666 master-0 kubenswrapper[7146]: I0318 13:17:51.327597 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:51.327666 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:51.327666 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:51.327666 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:51.328005 master-0 kubenswrapper[7146]: I0318 13:17:51.327673 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:52.328540 master-0 kubenswrapper[7146]: I0318 13:17:52.328471 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:52.328540 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:52.328540 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:52.328540 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:52.329112 master-0 kubenswrapper[7146]: I0318 13:17:52.328579 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:53.091655 master-0 kubenswrapper[7146]: E0318 13:17:53.091549 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:17:53.329066 master-0 kubenswrapper[7146]: I0318 13:17:53.328930 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:53.329066 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:53.329066 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:53.329066 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:53.329066 master-0 kubenswrapper[7146]: I0318 13:17:53.329060 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:54.318546 master-0 kubenswrapper[7146]: E0318 13:17:54.318426 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 18 13:17:54.328488 master-0 kubenswrapper[7146]: I0318 13:17:54.328398 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:54.328488 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:54.328488 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:54.328488 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:54.328872 master-0 kubenswrapper[7146]: I0318 13:17:54.328507 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:55.327603 master-0 kubenswrapper[7146]: I0318 13:17:55.327546 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:55.327603 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:55.327603 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:55.327603 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:55.328245 master-0 kubenswrapper[7146]: I0318 13:17:55.327616 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:56.327902 master-0 kubenswrapper[7146]: I0318 13:17:56.327833 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:56.327902 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:56.327902 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:56.327902 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:56.329070 master-0 kubenswrapper[7146]: I0318 13:17:56.329025 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:57.327919 master-0 kubenswrapper[7146]: I0318 13:17:57.327846 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:57.327919 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:57.327919 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:57.327919 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:57.329049 master-0 kubenswrapper[7146]: I0318 13:17:57.327970 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:58.327618 master-0 kubenswrapper[7146]: I0318 13:17:58.327516 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:58.327618 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:58.327618 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:58.327618 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:58.328526 master-0 kubenswrapper[7146]: I0318 13:17:58.328226 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:17:59.327915 master-0 kubenswrapper[7146]: I0318 13:17:59.327834 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:17:59.327915 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:17:59.327915 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:17:59.327915 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:17:59.327915 master-0 kubenswrapper[7146]: I0318 13:17:59.327900 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:00.328379 master-0 kubenswrapper[7146]: I0318 13:18:00.328182 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:00.328379 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:00.328379 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:00.328379 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:00.328976 master-0 kubenswrapper[7146]: I0318 13:18:00.328378 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:01.328106 master-0 kubenswrapper[7146]: I0318 13:18:01.328037 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:01.328106 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:01.328106 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:01.328106 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:01.328106 master-0 kubenswrapper[7146]: I0318 13:18:01.328100 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:02.327283 master-0 kubenswrapper[7146]: I0318 13:18:02.327249 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:02.327283 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:02.327283 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:02.327283 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:02.327721 master-0 kubenswrapper[7146]: I0318 13:18:02.327696 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:03.092498 master-0 kubenswrapper[7146]: E0318 13:18:03.092443 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:18:03.327439 master-0 kubenswrapper[7146]: I0318 13:18:03.327308 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:03.327439 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:03.327439 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:03.327439 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:03.328015 master-0 kubenswrapper[7146]: I0318 13:18:03.327500 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:03.358649 master-0 kubenswrapper[7146]: I0318 13:18:03.358518 7146 scope.go:117] "RemoveContainer" containerID="c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9"
Mar 18 13:18:03.358834 master-0 kubenswrapper[7146]: E0318 13:18:03.358766 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38"
Mar 18 13:18:04.327858 master-0 kubenswrapper[7146]: I0318 13:18:04.327778 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:04.327858 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:04.327858 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:04.327858 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:04.328473 master-0 kubenswrapper[7146]: I0318 13:18:04.327882 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:05.120102 master-0 kubenswrapper[7146]: E0318 13:18:05.119992 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 18 13:18:05.327047 master-0 kubenswrapper[7146]: I0318 13:18:05.326916 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:05.327047 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:05.327047 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:05.327047 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:05.327350 master-0 kubenswrapper[7146]: I0318 13:18:05.327070 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:06.328359 master-0 kubenswrapper[7146]: I0318 13:18:06.328262 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:06.328359 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:06.328359 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:06.328359 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:06.328960 master-0 kubenswrapper[7146]: I0318 13:18:06.328388 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:07.327123 master-0 kubenswrapper[7146]: I0318 13:18:07.326974 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:07.327123 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:07.327123 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:07.327123 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:07.327123 master-0 kubenswrapper[7146]: I0318 13:18:07.327056 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:07.755309 master-0 kubenswrapper[7146]: E0318 13:18:07.755087 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-mptsw.189df1e36b95eb0f openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-mptsw,UID:956513bf-3b98-4b0d-aca7-ccc3fdf8ae12,APIVersion:v1,ResourceVersion:11891,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Killing,Message:Stopping container
kube-multus-additional-cni-plugins,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:16:27.318700815 +0000 UTC m=+496.126918176,LastTimestamp:2026-03-18 13:16:27.318700815 +0000 UTC m=+496.126918176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:18:08.328035 master-0 kubenswrapper[7146]: I0318 13:18:08.327984 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:08.328035 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:08.328035 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:08.328035 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:08.328335 master-0 kubenswrapper[7146]: I0318 13:18:08.328053 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:09.327693 master-0 kubenswrapper[7146]: I0318 13:18:09.327563 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:09.327693 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:09.327693 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:09.327693 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:09.327693 master-0 kubenswrapper[7146]: I0318 13:18:09.327636 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:10.328230 master-0 kubenswrapper[7146]: I0318 13:18:10.328176 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:10.328230 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:10.328230 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:10.328230 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:10.329039 master-0 kubenswrapper[7146]: I0318 13:18:10.328262 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:11.027008 master-0 kubenswrapper[7146]: I0318 13:18:11.026960 7146 generic.go:334] "Generic (PLEG): container finished" podID="330df925-8429-4b96-9bfe-caa017c21afa" containerID="25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e" exitCode=0
Mar 18 13:18:11.027230 master-0 kubenswrapper[7146]: I0318 13:18:11.027023 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" event={"ID":"330df925-8429-4b96-9bfe-caa017c21afa","Type":"ContainerDied","Data":"25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e"}
Mar 18 13:18:11.027230 master-0 kubenswrapper[7146]: I0318 13:18:11.027167 7146 scope.go:117] "RemoveContainer" containerID="620704d7c61dd7667c0b9ebbc637d5a4615acb926bb8c0bad681bcafb14bec19"
Mar 18 13:18:11.027763 master-0 kubenswrapper[7146]: I0318 13:18:11.027745 7146 scope.go:117] "RemoveContainer" containerID="25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e"
Mar 18 13:18:11.028045 master-0 kubenswrapper[7146]: E0318 13:18:11.027985 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-4v84b_openshift-marketplace(330df925-8429-4b96-9bfe-caa017c21afa)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" podUID="330df925-8429-4b96-9bfe-caa017c21afa"
Mar 18 13:18:11.327012 master-0 kubenswrapper[7146]: I0318 13:18:11.326877 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:11.327012 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:11.327012 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:11.327012 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:11.327012 master-0 kubenswrapper[7146]: I0318 13:18:11.326957 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:12.328079 master-0 kubenswrapper[7146]: I0318 13:18:12.327931 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:12.328079 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:12.328079 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:12.328079 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:12.328079 master-0 kubenswrapper[7146]: I0318 13:18:12.328057 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:13.093442 master-0 kubenswrapper[7146]: E0318 13:18:13.093383 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:18:13.093442 master-0 kubenswrapper[7146]: E0318 13:18:13.093429 7146 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 18 13:18:13.328112 master-0 kubenswrapper[7146]: I0318 13:18:13.328054 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:13.328112 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:13.328112 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:13.328112 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:13.329448 master-0 kubenswrapper[7146]: I0318 13:18:13.328126 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:13.848373 master-0 kubenswrapper[7146]: E0318 13:18:13.848291 7146 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context
deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 13:18:14.327895 master-0 kubenswrapper[7146]: I0318 13:18:14.327827 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:14.327895 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:14.327895 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:14.327895 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:14.328242 master-0 kubenswrapper[7146]: I0318 13:18:14.327907 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:14.459266 master-0 kubenswrapper[7146]: I0318 13:18:14.459162 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:18:14.460263 master-0 kubenswrapper[7146]: I0318 13:18:14.460245 7146 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:18:14.461334 master-0 kubenswrapper[7146]: I0318 13:18:14.461263 7146 scope.go:117] "RemoveContainer" containerID="25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e"
Mar 18 13:18:14.461925 master-0 kubenswrapper[7146]: E0318 13:18:14.461887 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-4v84b_openshift-marketplace(330df925-8429-4b96-9bfe-caa017c21afa)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" podUID="330df925-8429-4b96-9bfe-caa017c21afa"
Mar 18 13:18:15.051532 master-0 kubenswrapper[7146]: I0318 13:18:15.051483 7146 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="f2eaa8545a70bd93c6fda5c0d0d68dc69b5076035140e52196a502a53a980e02" exitCode=0
Mar 18 13:18:15.051763 master-0 kubenswrapper[7146]: I0318 13:18:15.051578 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"f2eaa8545a70bd93c6fda5c0d0d68dc69b5076035140e52196a502a53a980e02"}
Mar 18 13:18:15.051988 master-0 kubenswrapper[7146]: I0318 13:18:15.051966 7146 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49"
Mar 18 13:18:15.052040 master-0 kubenswrapper[7146]: I0318 13:18:15.051989 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49"
Mar 18 13:18:15.052083 master-0 kubenswrapper[7146]: I0318 13:18:15.052061 7146 scope.go:117] "RemoveContainer" containerID="25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e"
Mar 18 13:18:15.052323 master-0 kubenswrapper[7146]: E0318 13:18:15.052285 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-4v84b_openshift-marketplace(330df925-8429-4b96-9bfe-caa017c21afa)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" podUID="330df925-8429-4b96-9bfe-caa017c21afa"
Mar 18 13:18:15.329161 master-0 kubenswrapper[7146]: I0318 13:18:15.329039 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:15.329161 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:15.329161 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:15.329161 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:15.329452 master-0 kubenswrapper[7146]: I0318 13:18:15.329163 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:16.059203 master-0 kubenswrapper[7146]: I0318 13:18:16.059155 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/0.log"
Mar 18 13:18:16.059675 master-0 kubenswrapper[7146]: I0318 13:18:16.059598 7146 generic.go:334] "Generic (PLEG): container finished" podID="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" containerID="416f123fbbc7d637d66d383e9de461fd5b529d5d437df7cc58e7901b8e2c57aa" exitCode=1
Mar 18 13:18:16.059720 master-0 kubenswrapper[7146]: I0318 13:18:16.059660 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerDied","Data":"416f123fbbc7d637d66d383e9de461fd5b529d5d437df7cc58e7901b8e2c57aa"}
Mar 18 13:18:16.060210 master-0 kubenswrapper[7146]: I0318 13:18:16.060178 7146 scope.go:117] "RemoveContainer" containerID="416f123fbbc7d637d66d383e9de461fd5b529d5d437df7cc58e7901b8e2c57aa"
Mar 18 13:18:16.329141 master-0 kubenswrapper[7146]: I0318 13:18:16.328987 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:16.329141 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:16.329141 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:16.329141 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:16.329141 master-0 kubenswrapper[7146]: I0318 13:18:16.329082 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:16.358141 master-0 kubenswrapper[7146]: I0318 13:18:16.358089 7146 scope.go:117] "RemoveContainer" containerID="c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9"
Mar 18 13:18:16.721527 master-0 kubenswrapper[7146]: E0318 13:18:16.721440 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 18 13:18:17.075539 master-0 kubenswrapper[7146]: I0318 13:18:17.075420 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/0.log"
Mar 18 13:18:17.076119 master-0 kubenswrapper[7146]: I0318 13:18:17.075848 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerStarted","Data":"fb561ff179fa983c83edae71820ed3518b9ee7790d08f319f3c55c3f33736ba9"}
Mar 18 13:18:17.078039 master-0 kubenswrapper[7146]: I0318 13:18:17.078000 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/3.log"
Mar 18 13:18:17.078452 master-0 kubenswrapper[7146]: I0318 13:18:17.078424 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c"}
Mar 18 13:18:17.327587 master-0 kubenswrapper[7146]: I0318 13:18:17.327434 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:17.327587 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:17.327587 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:17.327587 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:17.327587 master-0 kubenswrapper[7146]: I0318 13:18:17.327485 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:18.328177 master-0 kubenswrapper[7146]: I0318 13:18:18.328133 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:18.328177 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:18.328177 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:18.328177 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:18.329011 master-0 kubenswrapper[7146]: I0318 13:18:18.328977 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:19.327308 master-0 kubenswrapper[7146]: I0318 13:18:19.327186 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:19.327308 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:19.327308 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:19.327308 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:19.327308 master-0 kubenswrapper[7146]: I0318 13:18:19.327266 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:20.327225 master-0 kubenswrapper[7146]: I0318 13:18:20.327137 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:20.327225 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:20.327225 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:20.327225 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:20.327806 master-0 kubenswrapper[7146]: I0318 13:18:20.327241 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:21.328293 master-0 kubenswrapper[7146]: I0318 13:18:21.328205 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:21.328293 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:21.328293 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:21.328293 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:21.328293 master-0 kubenswrapper[7146]: I0318 13:18:21.328290 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:22.328161 master-0 kubenswrapper[7146]: I0318 13:18:22.328113 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:22.328161 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:22.328161 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:22.328161 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:22.329138 master-0 kubenswrapper[7146]: I0318 13:18:22.329068 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:23.328826 master-0 kubenswrapper[7146]: I0318 13:18:23.328696 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:23.328826 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:23.328826 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:23.328826 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:23.329539 master-0 kubenswrapper[7146]: I0318 13:18:23.328850 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:24.328842 master-0 kubenswrapper[7146]: I0318 13:18:24.328779 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:24.328842 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:24.328842 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:24.328842 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:24.329932 master-0 kubenswrapper[7146]: I0318 13:18:24.329887 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:25.328061 master-0 kubenswrapper[7146]: I0318 13:18:25.327983 7146
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:25.328061 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:25.328061 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:25.328061 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:25.328431 master-0 kubenswrapper[7146]: I0318 13:18:25.328084 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:26.328882 master-0 kubenswrapper[7146]: I0318 13:18:26.328816 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:26.328882 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:26.328882 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:26.328882 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:26.329650 master-0 kubenswrapper[7146]: I0318 13:18:26.328884 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:27.139379 master-0 kubenswrapper[7146]: I0318 13:18:27.139324 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-4r95z_baeb6380-95e4-4e10-9798-e1e22f20bade/manager/0.log"
Mar 18 13:18:27.139573 master-0 kubenswrapper[7146]: I0318 13:18:27.139380 7146 generic.go:334] "Generic (PLEG): container finished" podID="baeb6380-95e4-4e10-9798-e1e22f20bade" containerID="c8d0e68fce468a6cbf7a9e25b4e7afd1002b3dc75deb637dce883f568f47b361" exitCode=1
Mar 18 13:18:27.139573 master-0 kubenswrapper[7146]: I0318 13:18:27.139433 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" event={"ID":"baeb6380-95e4-4e10-9798-e1e22f20bade","Type":"ContainerDied","Data":"c8d0e68fce468a6cbf7a9e25b4e7afd1002b3dc75deb637dce883f568f47b361"}
Mar 18 13:18:27.140289 master-0 kubenswrapper[7146]: I0318 13:18:27.140239 7146 scope.go:117] "RemoveContainer" containerID="c8d0e68fce468a6cbf7a9e25b4e7afd1002b3dc75deb637dce883f568f47b361"
Mar 18 13:18:27.141865 master-0 kubenswrapper[7146]: I0318 13:18:27.141805 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8jrfz_234a5a6c-3790-49d0-b1e7-86f81048d96a/manager/0.log"
Mar 18 13:18:27.142301 master-0 kubenswrapper[7146]: I0318 13:18:27.142266 7146 generic.go:334] "Generic (PLEG): container finished" podID="234a5a6c-3790-49d0-b1e7-86f81048d96a" containerID="e421e24f0032092d372aa8567bf62089ec16fcc76e9db4714f59ae66d20632af" exitCode=1
Mar 18 13:18:27.142401 master-0 kubenswrapper[7146]: I0318 13:18:27.142305 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" event={"ID":"234a5a6c-3790-49d0-b1e7-86f81048d96a","Type":"ContainerDied","Data":"e421e24f0032092d372aa8567bf62089ec16fcc76e9db4714f59ae66d20632af"}
Mar 18 13:18:27.142976 master-0 kubenswrapper[7146]: I0318 13:18:27.142952 7146 scope.go:117] "RemoveContainer" containerID="e421e24f0032092d372aa8567bf62089ec16fcc76e9db4714f59ae66d20632af"
Mar 18 13:18:27.145885 master-0 kubenswrapper[7146]: I0318 13:18:27.145841 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/0.log"
Mar 18 13:18:27.145974 master-0 kubenswrapper[7146]: I0318 13:18:27.145909 7146 generic.go:334] "Generic (PLEG): container finished" podID="1ad93612-ab12-4b30-984f-119e1b924a84" containerID="ddc5fc9bc5738b2fb623ab1efb2af56221fe48e8d53ef7d28db78ae72c1b278b" exitCode=1
Mar 18 13:18:27.146020 master-0 kubenswrapper[7146]: I0318 13:18:27.145970 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerDied","Data":"ddc5fc9bc5738b2fb623ab1efb2af56221fe48e8d53ef7d28db78ae72c1b278b"}
Mar 18 13:18:27.146433 master-0 kubenswrapper[7146]: I0318 13:18:27.146391 7146 scope.go:117] "RemoveContainer" containerID="ddc5fc9bc5738b2fb623ab1efb2af56221fe48e8d53ef7d28db78ae72c1b278b"
Mar 18 13:18:27.322506 master-0 kubenswrapper[7146]: I0318 13:18:27.322439 7146 status_manager.go:851] "Failed to get status for pod" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" pod="openshift-multus/cni-sysctl-allowlist-ds-mptsw" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cni-sysctl-allowlist-ds-mptsw)"
Mar 18 13:18:27.327494 master-0 kubenswrapper[7146]: I0318 13:18:27.327140 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:27.327494 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:27.327494 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:18:27.327494 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:18:27.327494 master-0 kubenswrapper[7146]: I0318 13:18:27.327194 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:18:27.358384 master-0 kubenswrapper[7146]: I0318 13:18:27.358316 7146 scope.go:117] "RemoveContainer" containerID="25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e"
Mar 18 13:18:28.155619 master-0 kubenswrapper[7146]: I0318 13:18:28.155550 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-4r95z_baeb6380-95e4-4e10-9798-e1e22f20bade/manager/0.log"
Mar 18 13:18:28.155848 master-0 kubenswrapper[7146]: I0318 13:18:28.155644 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" event={"ID":"baeb6380-95e4-4e10-9798-e1e22f20bade","Type":"ContainerStarted","Data":"32e87d43910393f25b21e5cd9408e937e5ca0ef17fd126b0edaf8c2a1835e76f"}
Mar 18 13:18:28.156302 master-0 kubenswrapper[7146]: I0318 13:18:28.156267 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:18:28.158438 master-0 kubenswrapper[7146]: I0318 13:18:28.158401 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" event={"ID":"330df925-8429-4b96-9bfe-caa017c21afa","Type":"ContainerStarted","Data":"e30f309e7afca3b6899900c8f4ecc733b21a46a02f9fbf3c43c8c30a1aae6b4d"}
Mar 18 13:18:28.158805 master-0 kubenswrapper[7146]: I0318 13:18:28.158732 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:18:28.161783 master-0 kubenswrapper[7146]: I0318 13:18:28.161678 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:18:28.163010 master-0 kubenswrapper[7146]: I0318 13:18:28.162859 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8jrfz_234a5a6c-3790-49d0-b1e7-86f81048d96a/manager/0.log"
Mar 18 13:18:28.163364 master-0 kubenswrapper[7146]: I0318 13:18:28.163292 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" event={"ID":"234a5a6c-3790-49d0-b1e7-86f81048d96a","Type":"ContainerStarted","Data":"492a7e5293a240a0113afb5a6cc6996b29f6f2a5f8bb4b8613f00005079d391f"}
Mar 18 13:18:28.163779 master-0 kubenswrapper[7146]: I0318 13:18:28.163750 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:18:28.165361 master-0 kubenswrapper[7146]: I0318 13:18:28.165320 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/0.log"
Mar 18 13:18:28.165424 master-0 kubenswrapper[7146]: I0318 13:18:28.165388 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerStarted","Data":"9dec36619b4869559cc6b1399627ba288f0fc94abd2c5f064cedd0d6fd90fbe9"}
Mar 18 13:18:28.327081 master-0 kubenswrapper[7146]: I0318 13:18:28.326927 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:18:28.327081 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:18:28.327081 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:28.327081 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:28.327081 master-0 kubenswrapper[7146]: I0318 13:18:28.327014 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:29.172322 master-0 kubenswrapper[7146]: I0318 13:18:29.172280 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/0.log" Mar 18 13:18:29.173258 master-0 kubenswrapper[7146]: I0318 13:18:29.173234 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/cluster-cloud-controller-manager/0.log" Mar 18 13:18:29.173369 master-0 kubenswrapper[7146]: I0318 13:18:29.173277 7146 generic.go:334] "Generic (PLEG): container finished" podID="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" containerID="59e43a5798785560fb9b5499b32da91edb8ae46a4589c047f8415fd258612a45" exitCode=1 Mar 18 13:18:29.173435 master-0 kubenswrapper[7146]: I0318 13:18:29.173388 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerDied","Data":"59e43a5798785560fb9b5499b32da91edb8ae46a4589c047f8415fd258612a45"} Mar 18 13:18:29.174142 master-0 kubenswrapper[7146]: I0318 13:18:29.174077 7146 scope.go:117] "RemoveContainer" containerID="59e43a5798785560fb9b5499b32da91edb8ae46a4589c047f8415fd258612a45" Mar 18 13:18:29.328110 master-0 
kubenswrapper[7146]: I0318 13:18:29.328023 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:29.328110 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:29.328110 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:29.328110 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:29.328364 master-0 kubenswrapper[7146]: I0318 13:18:29.328158 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:29.923255 master-0 kubenswrapper[7146]: E0318 13:18:29.923112 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 18 13:18:30.182128 master-0 kubenswrapper[7146]: I0318 13:18:30.182021 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/0.log" Mar 18 13:18:30.182602 master-0 kubenswrapper[7146]: I0318 13:18:30.182435 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/cluster-cloud-controller-manager/0.log" Mar 18 13:18:30.182602 master-0 kubenswrapper[7146]: I0318 13:18:30.182514 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" event={"ID":"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5","Type":"ContainerStarted","Data":"463a3edad24e0a2a183a88617ce6345fd7ba78e737e6f2bff8c221765cdfb444"} Mar 18 13:18:30.327140 master-0 kubenswrapper[7146]: I0318 13:18:30.327070 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:30.327140 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:30.327140 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:30.327140 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:30.327521 master-0 kubenswrapper[7146]: I0318 13:18:30.327155 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:31.328363 master-0 kubenswrapper[7146]: I0318 13:18:31.328303 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:31.328363 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:31.328363 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:31.328363 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:31.329186 master-0 kubenswrapper[7146]: I0318 13:18:31.328384 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:18:32.103921 master-0 kubenswrapper[7146]: I0318 13:18:32.103845 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:18:32.328857 master-0 kubenswrapper[7146]: I0318 13:18:32.328796 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:32.328857 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:32.328857 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:32.328857 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:32.330077 master-0 kubenswrapper[7146]: I0318 13:18:32.330031 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:33.327271 master-0 kubenswrapper[7146]: I0318 13:18:33.327200 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:33.327271 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:33.327271 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:33.327271 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:33.327542 master-0 kubenswrapper[7146]: I0318 13:18:33.327282 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:33.389595 master-0 kubenswrapper[7146]: E0318 13:18:33.389514 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:18:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:18:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:18:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:18:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-0)" Mar 18 13:18:34.327223 master-0 kubenswrapper[7146]: I0318 13:18:34.327154 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:34.327223 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:34.327223 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:34.327223 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:34.327499 master-0 kubenswrapper[7146]: I0318 13:18:34.327234 7146 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:35.327157 master-0 kubenswrapper[7146]: I0318 13:18:35.327097 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:35.327157 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:35.327157 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:35.327157 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:35.327932 master-0 kubenswrapper[7146]: I0318 13:18:35.327173 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:36.328573 master-0 kubenswrapper[7146]: I0318 13:18:36.328441 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:36.328573 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:36.328573 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:36.328573 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:36.328573 master-0 kubenswrapper[7146]: I0318 13:18:36.328524 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 13:18:37.328028 master-0 kubenswrapper[7146]: I0318 13:18:37.327890 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:37.328028 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:37.328028 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:37.328028 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:37.328028 master-0 kubenswrapper[7146]: I0318 13:18:37.327963 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:38.328524 master-0 kubenswrapper[7146]: I0318 13:18:38.328434 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:38.328524 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:38.328524 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:38.328524 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:38.328524 master-0 kubenswrapper[7146]: I0318 13:18:38.328537 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:39.328343 master-0 kubenswrapper[7146]: I0318 13:18:39.328258 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:39.328343 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:39.328343 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:39.328343 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:39.328906 master-0 kubenswrapper[7146]: I0318 13:18:39.328352 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:39.903319 master-0 kubenswrapper[7146]: I0318 13:18:39.903186 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:18:40.327207 master-0 kubenswrapper[7146]: I0318 13:18:40.327150 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:40.327207 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:40.327207 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:40.327207 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:40.327584 master-0 kubenswrapper[7146]: I0318 13:18:40.327216 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:41.327541 master-0 kubenswrapper[7146]: I0318 13:18:41.327475 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:41.327541 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:41.327541 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:41.327541 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:41.328202 master-0 kubenswrapper[7146]: I0318 13:18:41.327559 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:41.761036 master-0 kubenswrapper[7146]: E0318 13:18:41.760906 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-mptsw.189df1e4eeb7727b openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-mptsw,UID:956513bf-3b98-4b0d-aca7-ccc3fdf8ae12,APIVersion:v1,ResourceVersion:11891,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:16:33.813680763 +0000 UTC m=+502.621898134,LastTimestamp:2026-03-18 13:16:33.813680763 +0000 UTC m=+502.621898134,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:18:42.328035 master-0 kubenswrapper[7146]: I0318 13:18:42.327907 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:42.328035 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:42.328035 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:42.328035 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:42.328035 master-0 kubenswrapper[7146]: I0318 13:18:42.327989 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:43.328219 master-0 kubenswrapper[7146]: I0318 13:18:43.328102 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:43.328219 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:43.328219 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:43.328219 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:43.328219 master-0 kubenswrapper[7146]: I0318 13:18:43.328203 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:43.390539 master-0 kubenswrapper[7146]: E0318 13:18:43.390472 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:18:44.329103 master-0 
kubenswrapper[7146]: I0318 13:18:44.329031 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:44.329103 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:44.329103 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:44.329103 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:44.329675 master-0 kubenswrapper[7146]: I0318 13:18:44.329103 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:45.327869 master-0 kubenswrapper[7146]: I0318 13:18:45.327808 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:45.327869 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:45.327869 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:45.327869 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:45.328178 master-0 kubenswrapper[7146]: I0318 13:18:45.327883 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:46.325063 master-0 kubenswrapper[7146]: E0318 13:18:46.324917 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:18:46.327239 master-0 kubenswrapper[7146]: I0318 13:18:46.327177 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:46.327239 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:46.327239 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:46.327239 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:46.327446 master-0 kubenswrapper[7146]: I0318 13:18:46.327270 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:47.327410 master-0 kubenswrapper[7146]: I0318 13:18:47.327324 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:47.327410 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:47.327410 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:47.327410 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:47.328434 master-0 kubenswrapper[7146]: I0318 13:18:47.327420 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 13:18:48.328337 master-0 kubenswrapper[7146]: I0318 13:18:48.328211 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:48.328337 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:48.328337 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:48.328337 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:48.328337 master-0 kubenswrapper[7146]: I0318 13:18:48.328299 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:49.055235 master-0 kubenswrapper[7146]: E0318 13:18:49.055148 7146 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:18:49.327399 master-0 kubenswrapper[7146]: I0318 13:18:49.327274 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:49.327399 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:49.327399 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:49.327399 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:49.327399 master-0 kubenswrapper[7146]: I0318 13:18:49.327327 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:18:49.329868 master-0 kubenswrapper[7146]: I0318 13:18:49.329817 7146 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="e371dab0b58bcbcc5b1907ef685fdfadda0906d8d24523dfbc948bf72419b864" exitCode=0 Mar 18 13:18:49.329868 master-0 kubenswrapper[7146]: I0318 13:18:49.329877 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"e371dab0b58bcbcc5b1907ef685fdfadda0906d8d24523dfbc948bf72419b864"} Mar 18 13:18:49.330420 master-0 kubenswrapper[7146]: I0318 13:18:49.330209 7146 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49" Mar 18 13:18:49.330420 master-0 kubenswrapper[7146]: I0318 13:18:49.330223 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49" Mar 18 13:18:50.328301 master-0 kubenswrapper[7146]: I0318 13:18:50.328221 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:50.328301 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:50.328301 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:50.328301 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:50.328301 master-0 kubenswrapper[7146]: I0318 13:18:50.328302 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:51.328506 master-0 kubenswrapper[7146]: I0318 13:18:51.328416 7146 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:51.328506 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:51.328506 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:51.328506 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:51.329186 master-0 kubenswrapper[7146]: I0318 13:18:51.328566 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:52.328175 master-0 kubenswrapper[7146]: I0318 13:18:52.328105 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:52.328175 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:52.328175 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:52.328175 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:52.328485 master-0 kubenswrapper[7146]: I0318 13:18:52.328204 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:53.329381 master-0 kubenswrapper[7146]: I0318 13:18:53.329324 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:53.329381 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:53.329381 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:53.329381 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:53.330078 master-0 kubenswrapper[7146]: I0318 13:18:53.329389 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:53.391454 master-0 kubenswrapper[7146]: E0318 13:18:53.391392 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:18:54.327919 master-0 kubenswrapper[7146]: I0318 13:18:54.327865 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:54.327919 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:54.327919 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:54.327919 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:54.328258 master-0 kubenswrapper[7146]: I0318 13:18:54.327926 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:55.327085 master-0 kubenswrapper[7146]: I0318 13:18:55.327028 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:55.327085 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:55.327085 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:55.327085 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:55.327592 master-0 kubenswrapper[7146]: I0318 13:18:55.327100 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:56.327440 master-0 kubenswrapper[7146]: I0318 13:18:56.327375 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:56.327440 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:56.327440 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:56.327440 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:56.328010 master-0 kubenswrapper[7146]: I0318 13:18:56.327450 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:57.328448 master-0 kubenswrapper[7146]: I0318 13:18:57.328392 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:18:57.328448 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:57.328448 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:57.328448 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:57.329058 master-0 kubenswrapper[7146]: I0318 13:18:57.328468 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:57.379082 master-0 kubenswrapper[7146]: I0318 13:18:57.379031 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/1.log" Mar 18 13:18:57.380010 master-0 kubenswrapper[7146]: I0318 13:18:57.379967 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/0.log" Mar 18 13:18:57.380129 master-0 kubenswrapper[7146]: I0318 13:18:57.380027 7146 generic.go:334] "Generic (PLEG): container finished" podID="1ad93612-ab12-4b30-984f-119e1b924a84" containerID="9dec36619b4869559cc6b1399627ba288f0fc94abd2c5f064cedd0d6fd90fbe9" exitCode=1 Mar 18 13:18:57.380129 master-0 kubenswrapper[7146]: I0318 13:18:57.380067 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerDied","Data":"9dec36619b4869559cc6b1399627ba288f0fc94abd2c5f064cedd0d6fd90fbe9"} Mar 18 13:18:57.380129 master-0 kubenswrapper[7146]: I0318 13:18:57.380108 7146 scope.go:117] "RemoveContainer" containerID="ddc5fc9bc5738b2fb623ab1efb2af56221fe48e8d53ef7d28db78ae72c1b278b" Mar 18 13:18:57.380629 master-0 kubenswrapper[7146]: I0318 
13:18:57.380592 7146 scope.go:117] "RemoveContainer" containerID="9dec36619b4869559cc6b1399627ba288f0fc94abd2c5f064cedd0d6fd90fbe9" Mar 18 13:18:57.380895 master-0 kubenswrapper[7146]: E0318 13:18:57.380831 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-wkw7f_openshift-cluster-storage-operator(1ad93612-ab12-4b30-984f-119e1b924a84)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" podUID="1ad93612-ab12-4b30-984f-119e1b924a84" Mar 18 13:18:58.327613 master-0 kubenswrapper[7146]: I0318 13:18:58.327526 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:58.327613 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:58.327613 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:58.327613 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:58.327613 master-0 kubenswrapper[7146]: I0318 13:18:58.327607 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:58.387563 master-0 kubenswrapper[7146]: I0318 13:18:58.387487 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/1.log" Mar 18 13:18:59.328915 master-0 kubenswrapper[7146]: I0318 13:18:59.328840 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:59.328915 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:18:59.328915 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:18:59.328915 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:18:59.329384 master-0 kubenswrapper[7146]: I0318 13:18:59.328927 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:00.327466 master-0 kubenswrapper[7146]: I0318 13:19:00.327357 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:00.327466 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:00.327466 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:00.327466 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:00.327466 master-0 kubenswrapper[7146]: I0318 13:19:00.327452 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:00.402545 master-0 kubenswrapper[7146]: I0318 13:19:00.402463 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-f8zc2_734f9f10-5bde-44d5-a831-021b93fd667d/machine-approver-controller/0.log" Mar 18 13:19:00.403308 master-0 kubenswrapper[7146]: I0318 13:19:00.403212 7146 generic.go:334] "Generic 
(PLEG): container finished" podID="734f9f10-5bde-44d5-a831-021b93fd667d" containerID="bf9efcefa6211001d8f08607f67b510663e50278def7ed0ac4963e0d3210e802" exitCode=255 Mar 18 13:19:00.403517 master-0 kubenswrapper[7146]: I0318 13:19:00.403287 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" event={"ID":"734f9f10-5bde-44d5-a831-021b93fd667d","Type":"ContainerDied","Data":"bf9efcefa6211001d8f08607f67b510663e50278def7ed0ac4963e0d3210e802"} Mar 18 13:19:00.404232 master-0 kubenswrapper[7146]: I0318 13:19:00.404188 7146 scope.go:117] "RemoveContainer" containerID="bf9efcefa6211001d8f08607f67b510663e50278def7ed0ac4963e0d3210e802" Mar 18 13:19:01.327820 master-0 kubenswrapper[7146]: I0318 13:19:01.327665 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:01.327820 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:01.327820 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:01.327820 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:01.327820 master-0 kubenswrapper[7146]: I0318 13:19:01.327771 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:01.416211 master-0 kubenswrapper[7146]: I0318 13:19:01.416122 7146 generic.go:334] "Generic (PLEG): container finished" podID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerID="b10031bd90b55a9a696a81d72f5edb8059040095aa52e3160902d05b4a7cd6cf" exitCode=0 Mar 18 13:19:01.416433 master-0 kubenswrapper[7146]: I0318 13:19:01.416271 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" event={"ID":"a5a93d05-3c8e-4666-9a55-d8f9e902db78","Type":"ContainerDied","Data":"b10031bd90b55a9a696a81d72f5edb8059040095aa52e3160902d05b4a7cd6cf"} Mar 18 13:19:01.417316 master-0 kubenswrapper[7146]: I0318 13:19:01.417260 7146 scope.go:117] "RemoveContainer" containerID="b10031bd90b55a9a696a81d72f5edb8059040095aa52e3160902d05b4a7cd6cf" Mar 18 13:19:01.420071 master-0 kubenswrapper[7146]: I0318 13:19:01.420017 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-f8zc2_734f9f10-5bde-44d5-a831-021b93fd667d/machine-approver-controller/0.log" Mar 18 13:19:01.420685 master-0 kubenswrapper[7146]: I0318 13:19:01.420627 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" event={"ID":"734f9f10-5bde-44d5-a831-021b93fd667d","Type":"ContainerStarted","Data":"3906a3404d81617b4adaf8d12abe4a03c4c27819a894878ba00b5f91f7214fdf"} Mar 18 13:19:01.422355 master-0 kubenswrapper[7146]: I0318 13:19:01.422319 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/0.log" Mar 18 13:19:01.422435 master-0 kubenswrapper[7146]: I0318 13:19:01.422359 7146 generic.go:334] "Generic (PLEG): container finished" podID="933a37fd-d76a-4f60-8dd8-301fb73daf42" containerID="2442652c47cb11893c3b83d3fad2866d5f95d1a4285de57aa76d8638f0a3ca4c" exitCode=1 Mar 18 13:19:01.422435 master-0 kubenswrapper[7146]: I0318 13:19:01.422415 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" event={"ID":"933a37fd-d76a-4f60-8dd8-301fb73daf42","Type":"ContainerDied","Data":"2442652c47cb11893c3b83d3fad2866d5f95d1a4285de57aa76d8638f0a3ca4c"} Mar 18 13:19:01.422717 master-0 
kubenswrapper[7146]: I0318 13:19:01.422694 7146 scope.go:117] "RemoveContainer" containerID="2442652c47cb11893c3b83d3fad2866d5f95d1a4285de57aa76d8638f0a3ca4c" Mar 18 13:19:01.425695 master-0 kubenswrapper[7146]: I0318 13:19:01.425648 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/0.log" Mar 18 13:19:01.425790 master-0 kubenswrapper[7146]: I0318 13:19:01.425711 7146 generic.go:334] "Generic (PLEG): container finished" podID="a01c92f5-7938-437d-8262-11598bd8023c" containerID="3d9515777e1454e99e50b07c4bb4005cbf649f4fb0161a941555e68ab2bef68b" exitCode=1 Mar 18 13:19:01.426157 master-0 kubenswrapper[7146]: I0318 13:19:01.425809 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerDied","Data":"3d9515777e1454e99e50b07c4bb4005cbf649f4fb0161a941555e68ab2bef68b"} Mar 18 13:19:01.426607 master-0 kubenswrapper[7146]: I0318 13:19:01.426555 7146 scope.go:117] "RemoveContainer" containerID="3d9515777e1454e99e50b07c4bb4005cbf649f4fb0161a941555e68ab2bef68b" Mar 18 13:19:01.428695 master-0 kubenswrapper[7146]: I0318 13:19:01.428619 7146 generic.go:334] "Generic (PLEG): container finished" podID="4bc77989-ecfc-4500-92a0-18c2b3b78408" containerID="da555fd9f47f4294570e6ad25c16548ca14ae9ec137f334d01bde47cd422dcf9" exitCode=0 Mar 18 13:19:01.428695 master-0 kubenswrapper[7146]: I0318 13:19:01.428678 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" event={"ID":"4bc77989-ecfc-4500-92a0-18c2b3b78408","Type":"ContainerDied","Data":"da555fd9f47f4294570e6ad25c16548ca14ae9ec137f334d01bde47cd422dcf9"} Mar 18 13:19:01.429629 master-0 kubenswrapper[7146]: I0318 13:19:01.429586 7146 scope.go:117] "RemoveContainer" 
containerID="da555fd9f47f4294570e6ad25c16548ca14ae9ec137f334d01bde47cd422dcf9" Mar 18 13:19:02.328364 master-0 kubenswrapper[7146]: I0318 13:19:02.328206 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:02.328364 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:02.328364 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:02.328364 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:02.328364 master-0 kubenswrapper[7146]: I0318 13:19:02.328313 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:02.438294 master-0 kubenswrapper[7146]: I0318 13:19:02.438185 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" event={"ID":"4bc77989-ecfc-4500-92a0-18c2b3b78408","Type":"ContainerStarted","Data":"9b8bcc361448e9ef35a7faecd8c6d2be61fe2414bd4deba09792fc607bb9ab49"} Mar 18 13:19:02.439748 master-0 kubenswrapper[7146]: I0318 13:19:02.439690 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" event={"ID":"a5a93d05-3c8e-4666-9a55-d8f9e902db78","Type":"ContainerStarted","Data":"673862c17b9e84a9d59c686af6c0f638cfa8ae15c58a6c7387f904f4b2566d48"} Mar 18 13:19:02.440085 master-0 kubenswrapper[7146]: I0318 13:19:02.440046 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:19:02.442218 master-0 kubenswrapper[7146]: I0318 13:19:02.442181 7146 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/0.log" Mar 18 13:19:02.442354 master-0 kubenswrapper[7146]: I0318 13:19:02.442280 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" event={"ID":"933a37fd-d76a-4f60-8dd8-301fb73daf42","Type":"ContainerStarted","Data":"a2376ae47af9cd37b6b522d0a7d4c1e7c497bc13bfadb1ac9f5e6804096642c7"} Mar 18 13:19:02.444011 master-0 kubenswrapper[7146]: I0318 13:19:02.443966 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:19:02.445484 master-0 kubenswrapper[7146]: I0318 13:19:02.445434 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/0.log" Mar 18 13:19:02.445592 master-0 kubenswrapper[7146]: I0318 13:19:02.445499 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerStarted","Data":"d4c91e969faf5650da5d5727f2dfc66f398fbfef974094943a2e96586ef2e4ac"} Mar 18 13:19:03.326325 master-0 kubenswrapper[7146]: E0318 13:19:03.326182 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 18 13:19:03.327971 master-0 kubenswrapper[7146]: I0318 13:19:03.327910 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:03.327971 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:03.327971 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:03.327971 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:03.328119 master-0 kubenswrapper[7146]: I0318 13:19:03.328003 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:03.392395 master-0 kubenswrapper[7146]: E0318 13:19:03.392313 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:03.459986 master-0 kubenswrapper[7146]: I0318 13:19:03.459893 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:19:03.459986 master-0 kubenswrapper[7146]: I0318 13:19:03.459983 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="1a3f1cc2c06b3716aaec57cfe182c6cc3f75f423059d28cf0ab2c58cba5e63fc" exitCode=0 Mar 18 13:19:03.460384 master-0 kubenswrapper[7146]: I0318 13:19:03.460058 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerDied","Data":"1a3f1cc2c06b3716aaec57cfe182c6cc3f75f423059d28cf0ab2c58cba5e63fc"} Mar 18 13:19:03.460841 master-0 kubenswrapper[7146]: I0318 13:19:03.460779 7146 scope.go:117] "RemoveContainer" 
containerID="1a3f1cc2c06b3716aaec57cfe182c6cc3f75f423059d28cf0ab2c58cba5e63fc" Mar 18 13:19:04.327898 master-0 kubenswrapper[7146]: I0318 13:19:04.327785 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:04.327898 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:04.327898 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:04.327898 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:04.328418 master-0 kubenswrapper[7146]: I0318 13:19:04.327998 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:04.470688 master-0 kubenswrapper[7146]: I0318 13:19:04.470628 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:19:04.471285 master-0 kubenswrapper[7146]: I0318 13:19:04.470739 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"9271e995129475a25190e498db48c9b9068a25ebaba03a20db9f05d72bd81dd8"} Mar 18 13:19:05.328454 master-0 kubenswrapper[7146]: I0318 13:19:05.328363 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:05.328454 master-0 kubenswrapper[7146]: [-]has-synced failed: reason 
withheld Mar 18 13:19:05.328454 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:05.328454 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:05.328882 master-0 kubenswrapper[7146]: I0318 13:19:05.328481 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:06.329110 master-0 kubenswrapper[7146]: I0318 13:19:06.329018 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:06.329110 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:06.329110 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:06.329110 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:06.329751 master-0 kubenswrapper[7146]: I0318 13:19:06.329111 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:07.328751 master-0 kubenswrapper[7146]: I0318 13:19:07.328650 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:07.328751 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:07.328751 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:07.328751 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:07.329136 master-0 kubenswrapper[7146]: I0318 
13:19:07.328788 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:08.328561 master-0 kubenswrapper[7146]: I0318 13:19:08.328480 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:08.328561 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:08.328561 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:08.328561 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:08.328898 master-0 kubenswrapper[7146]: I0318 13:19:08.328568 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:09.328463 master-0 kubenswrapper[7146]: I0318 13:19:09.328374 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:09.328463 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:09.328463 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:09.328463 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:09.328463 master-0 kubenswrapper[7146]: I0318 13:19:09.328447 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:19:10.327744 master-0 kubenswrapper[7146]: I0318 13:19:10.327694 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:10.327744 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:10.327744 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:10.327744 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:10.328187 master-0 kubenswrapper[7146]: I0318 13:19:10.327749 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:11.328826 master-0 kubenswrapper[7146]: I0318 13:19:11.328733 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:11.328826 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:11.328826 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:11.328826 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:11.329633 master-0 kubenswrapper[7146]: I0318 13:19:11.328854 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:11.358824 master-0 kubenswrapper[7146]: I0318 13:19:11.358777 7146 scope.go:117] "RemoveContainer" 
containerID="9dec36619b4869559cc6b1399627ba288f0fc94abd2c5f064cedd0d6fd90fbe9" Mar 18 13:19:11.518069 master-0 kubenswrapper[7146]: I0318 13:19:11.518007 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/1.log" Mar 18 13:19:11.518245 master-0 kubenswrapper[7146]: I0318 13:19:11.518073 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerStarted","Data":"35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1"} Mar 18 13:19:11.555919 master-0 kubenswrapper[7146]: I0318 13:19:11.555840 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:11.556141 master-0 kubenswrapper[7146]: I0318 13:19:11.556062 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:12.328258 master-0 kubenswrapper[7146]: I0318 13:19:12.328189 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:12.328258 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:12.328258 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:12.328258 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:12.328626 master-0 kubenswrapper[7146]: I0318 13:19:12.328278 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:13.327683 master-0 kubenswrapper[7146]: I0318 13:19:13.327577 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:13.327683 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:13.327683 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:13.327683 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:13.327683 master-0 kubenswrapper[7146]: I0318 13:19:13.327650 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:13.393159 master-0 kubenswrapper[7146]: E0318 13:19:13.393070 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:13.393159 master-0 kubenswrapper[7146]: E0318 13:19:13.393114 7146 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:19:14.328214 master-0 kubenswrapper[7146]: I0318 13:19:14.328142 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:14.328214 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:14.328214 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 
13:19:14.328214 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:14.328214 master-0 kubenswrapper[7146]: I0318 13:19:14.328213 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:14.556589 master-0 kubenswrapper[7146]: I0318 13:19:14.556490 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:19:14.556589 master-0 kubenswrapper[7146]: I0318 13:19:14.556582 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:15.327651 master-0 kubenswrapper[7146]: I0318 13:19:15.327555 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:15.327651 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:15.327651 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:15.327651 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:15.328031 master-0 kubenswrapper[7146]: I0318 13:19:15.327676 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:15.763915 master-0 kubenswrapper[7146]: E0318 13:19:15.763742 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189df197ad09e4d5 openshift-kube-controller-manager 9816 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:f88d0f62c0688ab1909dc97f30d381b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:01 +0000 UTC,LastTimestamp:2026-03-18 13:16:39.400272323 +0000 UTC m=+508.208489674,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:19:16.327968 master-0 kubenswrapper[7146]: I0318 13:19:16.327875 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:16.327968 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:16.327968 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:16.327968 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:16.328774 master-0 kubenswrapper[7146]: I0318 13:19:16.327979 7146 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:17.329223 master-0 kubenswrapper[7146]: I0318 13:19:17.329154 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:17.329223 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:17.329223 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:17.329223 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:17.329840 master-0 kubenswrapper[7146]: I0318 13:19:17.329274 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:18.328060 master-0 kubenswrapper[7146]: I0318 13:19:18.327952 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:18.328060 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:18.328060 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:18.328060 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:18.328060 master-0 kubenswrapper[7146]: I0318 13:19:18.328023 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 13:19:19.328618 master-0 kubenswrapper[7146]: I0318 13:19:19.328564 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:19.328618 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:19.328618 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:19.328618 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:19.329548 master-0 kubenswrapper[7146]: I0318 13:19:19.329507 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:20.327012 master-0 kubenswrapper[7146]: E0318 13:19:20.326917 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:19:20.328763 master-0 kubenswrapper[7146]: I0318 13:19:20.328718 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:20.328763 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:19:20.328763 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:19:20.328763 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:19:20.329391 master-0 kubenswrapper[7146]: I0318 13:19:20.328830 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:20.329391 master-0 kubenswrapper[7146]: I0318 13:19:20.328890 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:19:20.329632 master-0 kubenswrapper[7146]: I0318 13:19:20.329596 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f8a3caa2163025eca93eda965504b4cc6018d77ba7a2820b766d5ff6236b73e8"} pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" containerMessage="Container router failed startup probe, will be restarted" Mar 18 13:19:20.329692 master-0 kubenswrapper[7146]: I0318 13:19:20.329643 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" containerID="cri-o://f8a3caa2163025eca93eda965504b4cc6018d77ba7a2820b766d5ff6236b73e8" gracePeriod=3600 Mar 18 13:19:23.332529 master-0 kubenswrapper[7146]: E0318 13:19:23.332461 7146 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:23.590587 master-0 kubenswrapper[7146]: I0318 13:19:23.590466 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"6f05f6747a6514adc5cde513919c3bdc29ffb6ed0ade2f6a425c19a551bb4a8c"} Mar 18 13:19:24.556774 master-0 kubenswrapper[7146]: I0318 13:19:24.556669 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup 
probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:19:24.557547 master-0 kubenswrapper[7146]: I0318 13:19:24.556780 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:24.601758 master-0 kubenswrapper[7146]: I0318 13:19:24.601653 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"848877c2f6d1ef07f17e6c1264b87f7b953b932fd22f35ae6b8c6b811221f114"} Mar 18 13:19:24.601758 master-0 kubenswrapper[7146]: I0318 13:19:24.601732 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"d4f41e4ce4d9d6e7de6cfd4a3e8227b63acd5c4e76d0cf03caa6732417545af9"} Mar 18 13:19:24.601758 master-0 kubenswrapper[7146]: I0318 13:19:24.601745 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"f81aec172d3fd5ba7f1a996a5480892d078f8a7bb1def93bddd40cd1c81466ab"} Mar 18 13:19:24.601758 master-0 kubenswrapper[7146]: I0318 13:19:24.601756 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"00152beaf4ef9173dcc6f816e81c474dc52f514b563ec7779b209fc77ec8bb11"} Mar 18 13:19:24.602104 master-0 kubenswrapper[7146]: I0318 13:19:24.602070 7146 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49" Mar 18 13:19:24.602104 master-0 kubenswrapper[7146]: I0318 13:19:24.602087 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4d85aa7f-24d0-4deb-b57c-a3b530072b49" Mar 18 13:19:27.331367 master-0 kubenswrapper[7146]: I0318 13:19:27.331294 7146 status_manager.go:851] "Failed to get status for pod" podUID="baeb6380-95e4-4e10-9798-e1e22f20bade" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods operator-controller-controller-manager-57777556ff-4r95z)" Mar 18 13:19:29.393843 master-0 kubenswrapper[7146]: I0318 13:19:29.393747 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:29.394597 master-0 kubenswrapper[7146]: I0318 13:19:29.393854 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:33.709626 master-0 kubenswrapper[7146]: I0318 13:19:33.709569 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:19:33.829385 master-0 kubenswrapper[7146]: I0318 13:19:33.829117 7146 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:33.846503 master-0 kubenswrapper[7146]: I0318 13:19:33.846437 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:19:34.223141 master-0 kubenswrapper[7146]: I0318 13:19:34.222403 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:40220->127.0.0.1:10357: read: connection reset by peer" 
start-of-body= Mar 18 13:19:34.223141 master-0 kubenswrapper[7146]: I0318 13:19:34.222491 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:40220->127.0.0.1:10357: read: connection reset by peer" Mar 18 13:19:34.223141 master-0 kubenswrapper[7146]: I0318 13:19:34.222546 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:34.223409 master-0 kubenswrapper[7146]: I0318 13:19:34.223292 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9271e995129475a25190e498db48c9b9068a25ebaba03a20db9f05d72bd81dd8"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 13:19:34.223409 master-0 kubenswrapper[7146]: I0318 13:19:34.223374 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" containerID="cri-o://9271e995129475a25190e498db48c9b9068a25ebaba03a20db9f05d72bd81dd8" gracePeriod=30 Mar 18 13:19:34.572590 master-0 kubenswrapper[7146]: I0318 13:19:34.572463 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mptsw"] Mar 18 13:19:34.665377 master-0 kubenswrapper[7146]: I0318 13:19:34.665332 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/1.log" Mar 
18 13:19:34.667895 master-0 kubenswrapper[7146]: I0318 13:19:34.667864 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:19:34.667981 master-0 kubenswrapper[7146]: I0318 13:19:34.667914 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="9271e995129475a25190e498db48c9b9068a25ebaba03a20db9f05d72bd81dd8" exitCode=255 Mar 18 13:19:34.668023 master-0 kubenswrapper[7146]: I0318 13:19:34.667977 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerDied","Data":"9271e995129475a25190e498db48c9b9068a25ebaba03a20db9f05d72bd81dd8"} Mar 18 13:19:34.668061 master-0 kubenswrapper[7146]: I0318 13:19:34.668049 7146 scope.go:117] "RemoveContainer" containerID="1a3f1cc2c06b3716aaec57cfe182c6cc3f75f423059d28cf0ab2c58cba5e63fc" Mar 18 13:19:34.734464 master-0 kubenswrapper[7146]: I0318 13:19:34.734299 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mptsw"] Mar 18 13:19:35.367018 master-0 kubenswrapper[7146]: I0318 13:19:35.366924 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" path="/var/lib/kubelet/pods/956513bf-3b98-4b0d-aca7-ccc3fdf8ae12/volumes" Mar 18 13:19:35.674523 master-0 kubenswrapper[7146]: I0318 13:19:35.674465 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/1.log" Mar 18 13:19:35.675741 master-0 kubenswrapper[7146]: I0318 13:19:35.675713 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:19:35.675813 master-0 kubenswrapper[7146]: I0318 13:19:35.675769 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"367d02605d8a3e9e3c96096d9291261bd92aa57e5240deccfe8dc8ed30df0f83"} Mar 18 13:19:37.328804 master-0 kubenswrapper[7146]: E0318 13:19:37.328659 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:19:39.414347 master-0 kubenswrapper[7146]: I0318 13:19:39.414302 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:41.555401 master-0 kubenswrapper[7146]: I0318 13:19:41.555309 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:41.555857 master-0 kubenswrapper[7146]: I0318 13:19:41.555594 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:41.710983 master-0 kubenswrapper[7146]: I0318 13:19:41.710890 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/2.log" Mar 18 13:19:41.711390 master-0 kubenswrapper[7146]: I0318 13:19:41.711335 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/1.log" Mar 18 13:19:41.711452 master-0 kubenswrapper[7146]: I0318 13:19:41.711386 7146 generic.go:334] "Generic (PLEG): container finished" podID="1ad93612-ab12-4b30-984f-119e1b924a84" containerID="35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1" exitCode=1 Mar 18 13:19:41.712067 master-0 kubenswrapper[7146]: I0318 13:19:41.712028 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerDied","Data":"35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1"} Mar 18 13:19:41.712136 master-0 kubenswrapper[7146]: I0318 13:19:41.712093 7146 scope.go:117] "RemoveContainer" containerID="9dec36619b4869559cc6b1399627ba288f0fc94abd2c5f064cedd0d6fd90fbe9" Mar 18 13:19:41.712709 master-0 kubenswrapper[7146]: I0318 13:19:41.712670 7146 scope.go:117] "RemoveContainer" containerID="35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1" Mar 18 13:19:41.712975 master-0 kubenswrapper[7146]: E0318 13:19:41.712928 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-wkw7f_openshift-cluster-storage-operator(1ad93612-ab12-4b30-984f-119e1b924a84)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" podUID="1ad93612-ab12-4b30-984f-119e1b924a84" Mar 18 13:19:42.723119 master-0 kubenswrapper[7146]: I0318 13:19:42.723039 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/2.log" Mar 18 
13:19:43.667847 master-0 kubenswrapper[7146]: E0318 13:19:43.667765 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:19:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:19:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:19:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:19:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:44.413822 master-0 kubenswrapper[7146]: I0318 13:19:44.413788 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:44.556074 master-0 kubenswrapper[7146]: I0318 13:19:44.555997 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:19:44.556364 master-0 kubenswrapper[7146]: I0318 13:19:44.556107 7146 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:46.833537 master-0 kubenswrapper[7146]: E0318 13:19:46.833422 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:49.767307 master-0 kubenswrapper[7146]: E0318 13:19:49.767101 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189df197b94a53c4 openshift-kube-controller-manager 9823 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:f88d0f62c0688ab1909dc97f30d381b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:02 +0000 UTC,LastTimestamp:2026-03-18 13:16:39.614800212 +0000 UTC m=+508.423017573,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:19:53.668582 master-0 kubenswrapper[7146]: E0318 13:19:53.668499 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Mar 18 13:19:54.329567 master-0 kubenswrapper[7146]: E0318 13:19:54.329465 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:19:54.556121 master-0 kubenswrapper[7146]: I0318 13:19:54.556038 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:19:54.556336 master-0 kubenswrapper[7146]: I0318 13:19:54.556165 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:57.359158 master-0 kubenswrapper[7146]: I0318 13:19:57.358706 7146 scope.go:117] "RemoveContainer" containerID="35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1" Mar 18 13:19:57.359158 master-0 kubenswrapper[7146]: E0318 13:19:57.359002 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-wkw7f_openshift-cluster-storage-operator(1ad93612-ab12-4b30-984f-119e1b924a84)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" 
podUID="1ad93612-ab12-4b30-984f-119e1b924a84" Mar 18 13:20:00.760498 master-0 kubenswrapper[7146]: E0318 13:20:00.760441 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:20:01.839323 master-0 kubenswrapper[7146]: I0318 13:20:01.839282 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/1.log" Mar 18 13:20:01.840263 master-0 kubenswrapper[7146]: I0318 13:20:01.840223 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/0.log" Mar 18 13:20:01.840333 master-0 kubenswrapper[7146]: I0318 13:20:01.840297 7146 generic.go:334] "Generic (PLEG): container finished" podID="a01c92f5-7938-437d-8262-11598bd8023c" containerID="d4c91e969faf5650da5d5727f2dfc66f398fbfef974094943a2e96586ef2e4ac" exitCode=1 Mar 18 13:20:01.840377 master-0 kubenswrapper[7146]: I0318 13:20:01.840338 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerDied","Data":"d4c91e969faf5650da5d5727f2dfc66f398fbfef974094943a2e96586ef2e4ac"} Mar 18 13:20:01.840420 master-0 kubenswrapper[7146]: I0318 13:20:01.840389 7146 scope.go:117] "RemoveContainer" containerID="3d9515777e1454e99e50b07c4bb4005cbf649f4fb0161a941555e68ab2bef68b" Mar 18 13:20:01.841012 master-0 kubenswrapper[7146]: I0318 13:20:01.840978 7146 scope.go:117] "RemoveContainer" containerID="d4c91e969faf5650da5d5727f2dfc66f398fbfef974094943a2e96586ef2e4ac" Mar 18 13:20:01.841387 master-0 kubenswrapper[7146]: E0318 13:20:01.841352 7146 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-7w5g8_openshift-machine-api(a01c92f5-7938-437d-8262-11598bd8023c)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" podUID="a01c92f5-7938-437d-8262-11598bd8023c" Mar 18 13:20:02.847426 master-0 kubenswrapper[7146]: I0318 13:20:02.847365 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/1.log" Mar 18 13:20:03.669422 master-0 kubenswrapper[7146]: E0318 13:20:03.669304 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:04.556315 master-0 kubenswrapper[7146]: I0318 13:20:04.556238 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:04.556837 master-0 kubenswrapper[7146]: I0318 13:20:04.556317 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:04.556837 master-0 kubenswrapper[7146]: I0318 13:20:04.556380 7146 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:04.557264 master-0 kubenswrapper[7146]: I0318 13:20:04.557151 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"367d02605d8a3e9e3c96096d9291261bd92aa57e5240deccfe8dc8ed30df0f83"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 13:20:04.557336 master-0 kubenswrapper[7146]: I0318 13:20:04.557260 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" containerID="cri-o://367d02605d8a3e9e3c96096d9291261bd92aa57e5240deccfe8dc8ed30df0f83" gracePeriod=30 Mar 18 13:20:04.859880 master-0 kubenswrapper[7146]: I0318 13:20:04.859826 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/2.log" Mar 18 13:20:04.860519 master-0 kubenswrapper[7146]: I0318 13:20:04.860483 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/1.log" Mar 18 13:20:04.861978 master-0 kubenswrapper[7146]: I0318 13:20:04.861952 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:20:04.862058 master-0 kubenswrapper[7146]: I0318 13:20:04.861998 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" 
containerID="367d02605d8a3e9e3c96096d9291261bd92aa57e5240deccfe8dc8ed30df0f83" exitCode=255 Mar 18 13:20:04.862058 master-0 kubenswrapper[7146]: I0318 13:20:04.862032 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerDied","Data":"367d02605d8a3e9e3c96096d9291261bd92aa57e5240deccfe8dc8ed30df0f83"} Mar 18 13:20:04.862143 master-0 kubenswrapper[7146]: I0318 13:20:04.862067 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} Mar 18 13:20:04.862143 master-0 kubenswrapper[7146]: I0318 13:20:04.862084 7146 scope.go:117] "RemoveContainer" containerID="9271e995129475a25190e498db48c9b9068a25ebaba03a20db9f05d72bd81dd8" Mar 18 13:20:05.869926 master-0 kubenswrapper[7146]: I0318 13:20:05.869827 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/2.log" Mar 18 13:20:05.871417 master-0 kubenswrapper[7146]: I0318 13:20:05.871374 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:20:06.880198 master-0 kubenswrapper[7146]: I0318 13:20:06.880078 7146 generic.go:334] "Generic (PLEG): container finished" podID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerID="f8a3caa2163025eca93eda965504b4cc6018d77ba7a2820b766d5ff6236b73e8" exitCode=0 Mar 18 13:20:06.880198 master-0 kubenswrapper[7146]: I0318 13:20:06.880150 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" 
event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerDied","Data":"f8a3caa2163025eca93eda965504b4cc6018d77ba7a2820b766d5ff6236b73e8"} Mar 18 13:20:06.880198 master-0 kubenswrapper[7146]: I0318 13:20:06.880224 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerStarted","Data":"0aa6e3b114a8524f519800cee5439f1ad3e156a1def4a154cf20f82ebe9a3ef2"} Mar 18 13:20:06.881047 master-0 kubenswrapper[7146]: I0318 13:20:06.880253 7146 scope.go:117] "RemoveContainer" containerID="31665db688945aa094b6891895ce672425d61018bce7b516675fdc844fb9eb7e" Mar 18 13:20:07.326607 master-0 kubenswrapper[7146]: I0318 13:20:07.326511 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:20:07.330176 master-0 kubenswrapper[7146]: I0318 13:20:07.330079 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:07.330176 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:07.330176 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:07.330176 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:07.330645 master-0 kubenswrapper[7146]: I0318 13:20:07.330178 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:08.328062 master-0 kubenswrapper[7146]: I0318 13:20:08.327995 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:08.328062 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:08.328062 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:08.328062 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:08.328912 master-0 kubenswrapper[7146]: I0318 13:20:08.328144 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:09.328470 master-0 kubenswrapper[7146]: I0318 13:20:09.328402 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:09.328470 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:09.328470 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:09.328470 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:09.329270 master-0 kubenswrapper[7146]: I0318 13:20:09.328510 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:09.358573 master-0 kubenswrapper[7146]: I0318 13:20:09.358478 7146 scope.go:117] "RemoveContainer" containerID="35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1" Mar 18 13:20:09.905558 master-0 kubenswrapper[7146]: I0318 13:20:09.905417 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/2.log" Mar 18 13:20:09.905558 master-0 kubenswrapper[7146]: I0318 13:20:09.905492 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerStarted","Data":"ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823"} Mar 18 13:20:10.328306 master-0 kubenswrapper[7146]: I0318 13:20:10.328241 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:10.328306 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:10.328306 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:10.328306 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:10.329132 master-0 kubenswrapper[7146]: I0318 13:20:10.328316 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:11.327535 master-0 kubenswrapper[7146]: I0318 13:20:11.327477 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:11.327535 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:11.327535 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:11.327535 master-0 kubenswrapper[7146]: healthz check failed Mar 18 
13:20:11.327868 master-0 kubenswrapper[7146]: I0318 13:20:11.327550 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:11.330781 master-0 kubenswrapper[7146]: E0318 13:20:11.330728 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:20:11.556208 master-0 kubenswrapper[7146]: I0318 13:20:11.556155 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:11.556208 master-0 kubenswrapper[7146]: I0318 13:20:11.556211 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:12.328608 master-0 kubenswrapper[7146]: I0318 13:20:12.328571 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:12.328608 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:12.328608 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:12.328608 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:12.328990 master-0 kubenswrapper[7146]: I0318 13:20:12.328949 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 18 13:20:13.327977 master-0 kubenswrapper[7146]: I0318 13:20:13.327878 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:13.327977 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:13.327977 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:13.327977 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:13.327977 master-0 kubenswrapper[7146]: I0318 13:20:13.327958 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:13.671292 master-0 kubenswrapper[7146]: E0318 13:20:13.671163 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:14.325338 master-0 kubenswrapper[7146]: I0318 13:20:14.325270 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:20:14.327262 master-0 kubenswrapper[7146]: I0318 13:20:14.327210 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:14.327262 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:14.327262 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:14.327262 master-0 
kubenswrapper[7146]: healthz check failed Mar 18 13:20:14.327432 master-0 kubenswrapper[7146]: I0318 13:20:14.327284 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:14.556712 master-0 kubenswrapper[7146]: I0318 13:20:14.556607 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:14.557286 master-0 kubenswrapper[7146]: I0318 13:20:14.556723 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:15.327527 master-0 kubenswrapper[7146]: I0318 13:20:15.327454 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:15.327527 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:15.327527 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:15.327527 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:15.327783 master-0 kubenswrapper[7146]: I0318 13:20:15.327557 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:16.327214 master-0 kubenswrapper[7146]: I0318 13:20:16.327157 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:16.327214 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:16.327214 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:16.327214 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:16.327772 master-0 kubenswrapper[7146]: I0318 13:20:16.327235 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:16.957083 master-0 kubenswrapper[7146]: I0318 13:20:16.957030 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/4.log" Mar 18 13:20:16.957641 master-0 kubenswrapper[7146]: I0318 13:20:16.957573 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/3.log" Mar 18 13:20:16.958213 master-0 kubenswrapper[7146]: I0318 13:20:16.958159 7146 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" exitCode=1 Mar 18 13:20:16.958213 master-0 kubenswrapper[7146]: I0318 13:20:16.958197 7146 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerDied","Data":"5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c"} Mar 18 13:20:16.958383 master-0 kubenswrapper[7146]: I0318 13:20:16.958241 7146 scope.go:117] "RemoveContainer" containerID="c1cbee78b9223d65a91dcbbb50864bbde5c7ce89aa7a2abeca708031563e11b9" Mar 18 13:20:16.959006 master-0 kubenswrapper[7146]: I0318 13:20:16.958918 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:20:16.959235 master-0 kubenswrapper[7146]: E0318 13:20:16.959197 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:20:17.328986 master-0 kubenswrapper[7146]: I0318 13:20:17.328910 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:17.328986 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:17.328986 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:17.328986 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:17.329681 master-0 kubenswrapper[7146]: I0318 13:20:17.329018 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
13:20:17.359001 master-0 kubenswrapper[7146]: I0318 13:20:17.358572 7146 scope.go:117] "RemoveContainer" containerID="d4c91e969faf5650da5d5727f2dfc66f398fbfef974094943a2e96586ef2e4ac" Mar 18 13:20:17.966483 master-0 kubenswrapper[7146]: I0318 13:20:17.966448 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/4.log" Mar 18 13:20:17.968901 master-0 kubenswrapper[7146]: I0318 13:20:17.968871 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/1.log" Mar 18 13:20:17.969609 master-0 kubenswrapper[7146]: I0318 13:20:17.969570 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerStarted","Data":"3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e"} Mar 18 13:20:18.328840 master-0 kubenswrapper[7146]: I0318 13:20:18.328666 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:18.328840 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:18.328840 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:18.328840 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:18.328840 master-0 kubenswrapper[7146]: I0318 13:20:18.328760 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:19.327973 master-0 
kubenswrapper[7146]: I0318 13:20:19.327892 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:19.327973 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:19.327973 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:19.327973 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:19.327973 master-0 kubenswrapper[7146]: I0318 13:20:19.327965 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:20.327777 master-0 kubenswrapper[7146]: I0318 13:20:20.327691 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:20.327777 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:20.327777 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:20.327777 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:20.328400 master-0 kubenswrapper[7146]: I0318 13:20:20.327792 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:21.329148 master-0 kubenswrapper[7146]: I0318 13:20:21.328497 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:21.329148 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:21.329148 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:21.329148 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:21.330091 master-0 kubenswrapper[7146]: I0318 13:20:21.329263 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:22.327398 master-0 kubenswrapper[7146]: I0318 13:20:22.327340 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:22.327398 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:22.327398 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:22.327398 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:22.327398 master-0 kubenswrapper[7146]: I0318 13:20:22.327400 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:23.328979 master-0 kubenswrapper[7146]: I0318 13:20:23.328886 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:23.328979 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:23.328979 master-0 kubenswrapper[7146]: 
[+]process-running ok Mar 18 13:20:23.328979 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:23.329851 master-0 kubenswrapper[7146]: I0318 13:20:23.328988 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:23.672304 master-0 kubenswrapper[7146]: E0318 13:20:23.672059 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:23.672304 master-0 kubenswrapper[7146]: E0318 13:20:23.672106 7146 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:20:23.770148 master-0 kubenswrapper[7146]: E0318 13:20:23.769897 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189df197ba6a0988 openshift-kube-controller-manager 9824 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:f88d0f62c0688ab1909dc97f30d381b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:02 +0000 UTC,LastTimestamp:2026-03-18 13:16:39.624218839 +0000 UTC m=+508.432436200,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:20:24.328885 
master-0 kubenswrapper[7146]: I0318 13:20:24.328822 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:24.328885 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:24.328885 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:24.328885 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:24.329433 master-0 kubenswrapper[7146]: I0318 13:20:24.328912 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:24.556236 master-0 kubenswrapper[7146]: I0318 13:20:24.556158 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:24.556482 master-0 kubenswrapper[7146]: I0318 13:20:24.556244 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:25.327541 master-0 kubenswrapper[7146]: I0318 13:20:25.327463 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:25.327541 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:25.327541 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:25.327541 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:25.327875 master-0 kubenswrapper[7146]: I0318 13:20:25.327533 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:26.328316 master-0 kubenswrapper[7146]: I0318 13:20:26.328231 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:26.328316 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:26.328316 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:26.328316 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:26.328967 master-0 kubenswrapper[7146]: I0318 13:20:26.328329 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:27.327569 master-0 kubenswrapper[7146]: I0318 13:20:27.327490 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:27.327569 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:27.327569 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:27.327569 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:27.327569 master-0 kubenswrapper[7146]: I0318 13:20:27.327560 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:28.328574 master-0 kubenswrapper[7146]: I0318 13:20:28.328358 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:28.328574 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:28.328574 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:28.328574 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:28.328574 master-0 kubenswrapper[7146]: I0318 13:20:28.328449 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:28.332682 master-0 kubenswrapper[7146]: E0318 13:20:28.332591 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:20:29.327781 master-0 kubenswrapper[7146]: I0318 13:20:29.327703 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:29.327781 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:29.327781 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:29.327781 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:29.328218 master-0 kubenswrapper[7146]: I0318 13:20:29.327777 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:29.359050 master-0 kubenswrapper[7146]: I0318 13:20:29.358757 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:20:29.359584 master-0 kubenswrapper[7146]: E0318 13:20:29.359219 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:20:30.327957 master-0 kubenswrapper[7146]: I0318 13:20:30.327864 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:30.327957 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:30.327957 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:30.327957 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:30.327957 master-0 kubenswrapper[7146]: I0318 13:20:30.327927 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:31.328148 master-0 kubenswrapper[7146]: I0318 13:20:31.328009 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:31.328148 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:31.328148 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:31.328148 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:31.328880 master-0 kubenswrapper[7146]: I0318 13:20:31.328186 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:32.327986 master-0 kubenswrapper[7146]: I0318 13:20:32.327912 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:32.327986 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:32.327986 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:32.327986 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:32.328727 master-0 kubenswrapper[7146]: I0318 13:20:32.328013 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:33.327549 
master-0 kubenswrapper[7146]: I0318 13:20:33.327488 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:33.327549 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:33.327549 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:33.327549 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:33.327912 master-0 kubenswrapper[7146]: I0318 13:20:33.327566 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:34.327517 master-0 kubenswrapper[7146]: I0318 13:20:34.327456 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:34.327517 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:34.327517 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:34.327517 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:34.327517 master-0 kubenswrapper[7146]: I0318 13:20:34.327517 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:34.556174 master-0 kubenswrapper[7146]: I0318 13:20:34.556099 7146 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup 
probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:34.556174 master-0 kubenswrapper[7146]: I0318 13:20:34.556163 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:34.556495 master-0 kubenswrapper[7146]: I0318 13:20:34.556213 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:34.556901 master-0 kubenswrapper[7146]: I0318 13:20:34.556860 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 13:20:34.557046 master-0 kubenswrapper[7146]: I0318 13:20:34.556969 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" containerID="cri-o://5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" gracePeriod=30 Mar 18 13:20:34.671932 master-0 kubenswrapper[7146]: E0318 13:20:34.671884 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(f88d0f62c0688ab1909dc97f30d381b9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" Mar 18 13:20:35.072799 master-0 kubenswrapper[7146]: I0318 13:20:35.072718 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log" Mar 18 13:20:35.073359 master-0 kubenswrapper[7146]: I0318 13:20:35.073313 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/2.log" Mar 18 13:20:35.074664 master-0 kubenswrapper[7146]: I0318 13:20:35.074638 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:20:35.074725 master-0 kubenswrapper[7146]: I0318 13:20:35.074673 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" exitCode=255 Mar 18 13:20:35.074725 master-0 kubenswrapper[7146]: I0318 13:20:35.074699 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerDied","Data":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} Mar 18 13:20:35.074792 master-0 kubenswrapper[7146]: I0318 13:20:35.074731 7146 scope.go:117] "RemoveContainer" containerID="367d02605d8a3e9e3c96096d9291261bd92aa57e5240deccfe8dc8ed30df0f83" Mar 18 13:20:35.075651 master-0 kubenswrapper[7146]: I0318 13:20:35.075605 7146 scope.go:117] "RemoveContainer" 
containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" Mar 18 13:20:35.076190 master-0 kubenswrapper[7146]: E0318 13:20:35.076147 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(f88d0f62c0688ab1909dc97f30d381b9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" Mar 18 13:20:35.328107 master-0 kubenswrapper[7146]: I0318 13:20:35.327962 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:35.328107 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:35.328107 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:35.328107 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:35.328652 master-0 kubenswrapper[7146]: I0318 13:20:35.328115 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:36.081802 master-0 kubenswrapper[7146]: I0318 13:20:36.081758 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log" Mar 18 13:20:36.083276 master-0 kubenswrapper[7146]: I0318 13:20:36.083250 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:20:36.327619 master-0 kubenswrapper[7146]: I0318 13:20:36.327546 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:36.327619 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:36.327619 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:36.327619 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:36.327954 master-0 kubenswrapper[7146]: I0318 13:20:36.327616 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:37.326813 master-0 kubenswrapper[7146]: I0318 13:20:37.326709 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:37.326813 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:37.326813 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:37.326813 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:37.327443 master-0 kubenswrapper[7146]: I0318 13:20:37.326847 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:38.327409 master-0 kubenswrapper[7146]: I0318 
13:20:38.327356 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:38.327409 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:38.327409 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:38.327409 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:38.328268 master-0 kubenswrapper[7146]: I0318 13:20:38.327415 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:39.328460 master-0 kubenswrapper[7146]: I0318 13:20:39.328351 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:39.328460 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:39.328460 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:39.328460 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:39.328460 master-0 kubenswrapper[7146]: I0318 13:20:39.328415 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:40.106048 master-0 kubenswrapper[7146]: I0318 13:20:40.105998 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/3.log" 
Mar 18 13:20:40.106436 master-0 kubenswrapper[7146]: I0318 13:20:40.106404 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/2.log" Mar 18 13:20:40.106499 master-0 kubenswrapper[7146]: I0318 13:20:40.106445 7146 generic.go:334] "Generic (PLEG): container finished" podID="1ad93612-ab12-4b30-984f-119e1b924a84" containerID="ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823" exitCode=1 Mar 18 13:20:40.106499 master-0 kubenswrapper[7146]: I0318 13:20:40.106473 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerDied","Data":"ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823"} Mar 18 13:20:40.106563 master-0 kubenswrapper[7146]: I0318 13:20:40.106509 7146 scope.go:117] "RemoveContainer" containerID="35e1bd2871ee9933b0e98979a9f60d15197b30b9fcbe0f0644fec68cf9d194c1" Mar 18 13:20:40.107096 master-0 kubenswrapper[7146]: I0318 13:20:40.107046 7146 scope.go:117] "RemoveContainer" containerID="ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823" Mar 18 13:20:40.107326 master-0 kubenswrapper[7146]: E0318 13:20:40.107280 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-wkw7f_openshift-cluster-storage-operator(1ad93612-ab12-4b30-984f-119e1b924a84)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" podUID="1ad93612-ab12-4b30-984f-119e1b924a84" Mar 18 13:20:40.328073 master-0 kubenswrapper[7146]: I0318 13:20:40.328002 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:40.328073 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:40.328073 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:40.328073 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:40.328860 master-0 kubenswrapper[7146]: I0318 13:20:40.328096 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:41.113082 master-0 kubenswrapper[7146]: I0318 13:20:41.113029 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/3.log" Mar 18 13:20:41.327295 master-0 kubenswrapper[7146]: I0318 13:20:41.327251 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:41.327295 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:41.327295 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:41.327295 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:41.327867 master-0 kubenswrapper[7146]: I0318 13:20:41.327680 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:41.556252 master-0 kubenswrapper[7146]: I0318 13:20:41.556128 7146 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:41.557035 master-0 kubenswrapper[7146]: I0318 13:20:41.557001 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" Mar 18 13:20:41.557278 master-0 kubenswrapper[7146]: E0318 13:20:41.557231 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(f88d0f62c0688ab1909dc97f30d381b9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" Mar 18 13:20:42.327425 master-0 kubenswrapper[7146]: I0318 13:20:42.327325 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:42.327425 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:42.327425 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:42.327425 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:42.327810 master-0 kubenswrapper[7146]: I0318 13:20:42.327785 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:42.358363 master-0 kubenswrapper[7146]: I0318 13:20:42.358312 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:20:42.358829 master-0 kubenswrapper[7146]: E0318 13:20:42.358805 7146 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:20:43.328446 master-0 kubenswrapper[7146]: I0318 13:20:43.328289 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:43.328446 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:43.328446 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:43.328446 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:43.329397 master-0 kubenswrapper[7146]: I0318 13:20:43.329356 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:44.327854 master-0 kubenswrapper[7146]: I0318 13:20:44.327769 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:44.327854 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:44.327854 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:44.327854 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:44.328234 master-0 kubenswrapper[7146]: I0318 13:20:44.327868 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:45.328166 master-0 kubenswrapper[7146]: I0318 13:20:45.328103 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:45.328166 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:45.328166 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:45.328166 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:45.328702 master-0 kubenswrapper[7146]: I0318 13:20:45.328184 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:45.333611 master-0 kubenswrapper[7146]: E0318 13:20:45.333545 7146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:20:46.328850 master-0 kubenswrapper[7146]: I0318 13:20:46.328773 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:46.328850 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:46.328850 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:46.328850 master-0 kubenswrapper[7146]: 
healthz check failed Mar 18 13:20:46.328850 master-0 kubenswrapper[7146]: I0318 13:20:46.328849 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:47.327425 master-0 kubenswrapper[7146]: I0318 13:20:47.327373 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:47.327425 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:47.327425 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:47.327425 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:47.328054 master-0 kubenswrapper[7146]: I0318 13:20:47.327452 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:48.327853 master-0 kubenswrapper[7146]: I0318 13:20:48.327604 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:48.327853 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:48.327853 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:48.327853 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:48.327853 master-0 kubenswrapper[7146]: I0318 13:20:48.327697 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" 
podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:49.329074 master-0 kubenswrapper[7146]: I0318 13:20:49.328982 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:49.329074 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:49.329074 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:49.329074 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:49.329811 master-0 kubenswrapper[7146]: I0318 13:20:49.329121 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:50.328247 master-0 kubenswrapper[7146]: I0318 13:20:50.328196 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:50.328247 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:50.328247 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:50.328247 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:50.329766 master-0 kubenswrapper[7146]: I0318 13:20:50.329105 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:51.327833 master-0 kubenswrapper[7146]: I0318 13:20:51.327727 7146 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:51.327833 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:51.327833 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:51.327833 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:51.328430 master-0 kubenswrapper[7146]: I0318 13:20:51.328394 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:52.327776 master-0 kubenswrapper[7146]: I0318 13:20:52.327705 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:52.327776 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:52.327776 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:52.327776 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:52.328690 master-0 kubenswrapper[7146]: I0318 13:20:52.327805 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:52.358721 master-0 kubenswrapper[7146]: I0318 13:20:52.358673 7146 scope.go:117] "RemoveContainer" containerID="ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823" Mar 18 13:20:52.359367 master-0 kubenswrapper[7146]: E0318 13:20:52.359340 7146 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-wkw7f_openshift-cluster-storage-operator(1ad93612-ab12-4b30-984f-119e1b924a84)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" podUID="1ad93612-ab12-4b30-984f-119e1b924a84" Mar 18 13:20:53.329624 master-0 kubenswrapper[7146]: I0318 13:20:53.329579 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:53.329624 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:53.329624 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:53.329624 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:53.330542 master-0 kubenswrapper[7146]: I0318 13:20:53.330510 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:53.358730 master-0 kubenswrapper[7146]: I0318 13:20:53.358691 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" Mar 18 13:20:53.359256 master-0 kubenswrapper[7146]: E0318 13:20:53.359230 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(f88d0f62c0688ab1909dc97f30d381b9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="f88d0f62c0688ab1909dc97f30d381b9" Mar 18 13:20:54.049231 master-0 kubenswrapper[7146]: E0318 13:20:54.049170 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:20:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:20:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:20:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:20:44Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:54.329660 master-0 kubenswrapper[7146]: I0318 13:20:54.329501 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:54.329660 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:54.329660 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:54.329660 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:54.329660 master-0 kubenswrapper[7146]: I0318 13:20:54.329628 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:55.328100 master-0 kubenswrapper[7146]: I0318 13:20:55.328017 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:55.328100 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:55.328100 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:55.328100 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:55.328447 master-0 kubenswrapper[7146]: I0318 13:20:55.328132 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:55.358330 master-0 kubenswrapper[7146]: I0318 13:20:55.358258 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:20:55.358877 master-0 kubenswrapper[7146]: E0318 13:20:55.358573 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:20:56.328505 master-0 kubenswrapper[7146]: I0318 13:20:56.328444 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:56.328505 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:56.328505 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:56.328505 master-0 
kubenswrapper[7146]: healthz check failed Mar 18 13:20:56.328800 master-0 kubenswrapper[7146]: I0318 13:20:56.328538 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:57.327993 master-0 kubenswrapper[7146]: I0318 13:20:57.327909 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:57.327993 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:57.327993 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:57.327993 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:57.328633 master-0 kubenswrapper[7146]: I0318 13:20:57.328005 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:57.773152 master-0 kubenswrapper[7146]: E0318 13:20:57.773013 7146 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df1e6f13944a8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod 
bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:16:42.445677736 +0000 UTC m=+511.253895097,LastTimestamp:2026-03-18 13:16:42.445677736 +0000 UTC m=+511.253895097,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:20:58.327952 master-0 kubenswrapper[7146]: I0318 13:20:58.327883 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:58.327952 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:58.327952 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:58.327952 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:58.328669 master-0 kubenswrapper[7146]: I0318 13:20:58.327951 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:59.329244 master-0 kubenswrapper[7146]: I0318 13:20:59.329188 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:59.329244 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:20:59.329244 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:20:59.329244 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:20:59.329962 master-0 kubenswrapper[7146]: I0318 13:20:59.329256 7146 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:00.226385 master-0 kubenswrapper[7146]: I0318 13:21:00.226314 7146 generic.go:334] "Generic (PLEG): container finished" podID="2385db6b-4286-4839-822c-aa9c52290172" containerID="76706e531d703321ab797434284e0ec77d46262c1f93022a12f301f5e424b532" exitCode=0 Mar 18 13:21:00.226385 master-0 kubenswrapper[7146]: I0318 13:21:00.226367 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" event={"ID":"2385db6b-4286-4839-822c-aa9c52290172","Type":"ContainerDied","Data":"76706e531d703321ab797434284e0ec77d46262c1f93022a12f301f5e424b532"} Mar 18 13:21:00.226909 master-0 kubenswrapper[7146]: I0318 13:21:00.226879 7146 scope.go:117] "RemoveContainer" containerID="76706e531d703321ab797434284e0ec77d46262c1f93022a12f301f5e424b532" Mar 18 13:21:00.329007 master-0 kubenswrapper[7146]: I0318 13:21:00.328953 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:00.329007 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:00.329007 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:00.329007 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:00.329644 master-0 kubenswrapper[7146]: I0318 13:21:00.329022 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:01.236053 master-0 kubenswrapper[7146]: I0318 13:21:01.235992 
7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" event={"ID":"2385db6b-4286-4839-822c-aa9c52290172","Type":"ContainerStarted","Data":"9f0608640b4d6afce19fcd199954782456f7bfac9c9bcd6b73763915f3e7b0c0"} Mar 18 13:21:01.327331 master-0 kubenswrapper[7146]: I0318 13:21:01.327280 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:01.327331 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:01.327331 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:01.327331 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:01.327587 master-0 kubenswrapper[7146]: I0318 13:21:01.327347 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:02.245392 master-0 kubenswrapper[7146]: I0318 13:21:02.245359 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-cwpkz_16a930da-d793-486f-bcef-cf042d3c427d/cluster-olm-operator/0.log" Mar 18 13:21:02.246006 master-0 kubenswrapper[7146]: I0318 13:21:02.245896 7146 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" containerID="3683163827a4edece2407b15e519e57ed5810d9901b275e4063ae3e6c8a46a7c" exitCode=0 Mar 18 13:21:02.246006 master-0 kubenswrapper[7146]: I0318 13:21:02.245950 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" 
event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerDied","Data":"3683163827a4edece2407b15e519e57ed5810d9901b275e4063ae3e6c8a46a7c"} Mar 18 13:21:02.246006 master-0 kubenswrapper[7146]: I0318 13:21:02.245983 7146 scope.go:117] "RemoveContainer" containerID="05341255eca6050db7cb2260fb5dd7a45d91c7026314e974f2c6b81b9259883f" Mar 18 13:21:02.246466 master-0 kubenswrapper[7146]: I0318 13:21:02.246444 7146 scope.go:117] "RemoveContainer" containerID="3683163827a4edece2407b15e519e57ed5810d9901b275e4063ae3e6c8a46a7c" Mar 18 13:21:02.259516 master-0 kubenswrapper[7146]: I0318 13:21:02.258000 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/0.log" Mar 18 13:21:02.259516 master-0 kubenswrapper[7146]: I0318 13:21:02.258525 7146 generic.go:334] "Generic (PLEG): container finished" podID="bd033b5b-af07-4e69-9a5c-46f7c9bde95a" containerID="e20cb392c2151c9b567d2f9cb92d9caffc6ffa0a0c94ec6c22fe2417cecc2fef" exitCode=255 Mar 18 13:21:02.259516 master-0 kubenswrapper[7146]: I0318 13:21:02.258606 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" event={"ID":"bd033b5b-af07-4e69-9a5c-46f7c9bde95a","Type":"ContainerDied","Data":"e20cb392c2151c9b567d2f9cb92d9caffc6ffa0a0c94ec6c22fe2417cecc2fef"} Mar 18 13:21:02.259868 master-0 kubenswrapper[7146]: I0318 13:21:02.259606 7146 scope.go:117] "RemoveContainer" containerID="e20cb392c2151c9b567d2f9cb92d9caffc6ffa0a0c94ec6c22fe2417cecc2fef" Mar 18 13:21:02.266672 master-0 kubenswrapper[7146]: I0318 13:21:02.266635 7146 generic.go:334] "Generic (PLEG): container finished" podID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerID="8a450d61a86ca02f43befd316491f266f23f5f89125343df32e08e9b38e85140" exitCode=0 Mar 18 13:21:02.266744 master-0 kubenswrapper[7146]: I0318 13:21:02.266704 7146 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" event={"ID":"65cfa12a-0711-4fba-8859-73a3f8f250a9","Type":"ContainerDied","Data":"8a450d61a86ca02f43befd316491f266f23f5f89125343df32e08e9b38e85140"} Mar 18 13:21:02.268541 master-0 kubenswrapper[7146]: I0318 13:21:02.268205 7146 scope.go:117] "RemoveContainer" containerID="8a450d61a86ca02f43befd316491f266f23f5f89125343df32e08e9b38e85140" Mar 18 13:21:02.275862 master-0 kubenswrapper[7146]: I0318 13:21:02.275581 7146 generic.go:334] "Generic (PLEG): container finished" podID="cb471665-2b07-48df-9881-3fb663390b23" containerID="68c5ffa759fcc437f54d7bd3e789e8c2d2ddd9ad3679a98335c6cd2c8429c33c" exitCode=0 Mar 18 13:21:02.276029 master-0 kubenswrapper[7146]: I0318 13:21:02.275996 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" event={"ID":"cb471665-2b07-48df-9881-3fb663390b23","Type":"ContainerDied","Data":"68c5ffa759fcc437f54d7bd3e789e8c2d2ddd9ad3679a98335c6cd2c8429c33c"} Mar 18 13:21:02.278680 master-0 kubenswrapper[7146]: I0318 13:21:02.276793 7146 scope.go:117] "RemoveContainer" containerID="68c5ffa759fcc437f54d7bd3e789e8c2d2ddd9ad3679a98335c6cd2c8429c33c" Mar 18 13:21:02.279380 master-0 kubenswrapper[7146]: I0318 13:21:02.279325 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-p6tvz_369e9689-e2f6-4276-b096-8db094f8d6ae/cluster-node-tuning-operator/0.log" Mar 18 13:21:02.279457 master-0 kubenswrapper[7146]: I0318 13:21:02.279377 7146 generic.go:334] "Generic (PLEG): container finished" podID="369e9689-e2f6-4276-b096-8db094f8d6ae" containerID="a4b53bab35719b1de9b4d4e1f4c3fdf356bb114dd12ac3e84e5af4fe101ae6bf" exitCode=1 Mar 18 13:21:02.279457 master-0 kubenswrapper[7146]: I0318 13:21:02.279438 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" event={"ID":"369e9689-e2f6-4276-b096-8db094f8d6ae","Type":"ContainerDied","Data":"a4b53bab35719b1de9b4d4e1f4c3fdf356bb114dd12ac3e84e5af4fe101ae6bf"} Mar 18 13:21:02.279842 master-0 kubenswrapper[7146]: I0318 13:21:02.279818 7146 scope.go:117] "RemoveContainer" containerID="a4b53bab35719b1de9b4d4e1f4c3fdf356bb114dd12ac3e84e5af4fe101ae6bf" Mar 18 13:21:02.300285 master-0 kubenswrapper[7146]: I0318 13:21:02.300247 7146 generic.go:334] "Generic (PLEG): container finished" podID="0213214b-693b-411b-8254-48d7826011eb" containerID="c078d45f41d868996e6ecf51daad3770f6b4c7185d981080d710f8cb1c0e4347" exitCode=0 Mar 18 13:21:02.300383 master-0 kubenswrapper[7146]: I0318 13:21:02.300321 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerDied","Data":"c078d45f41d868996e6ecf51daad3770f6b4c7185d981080d710f8cb1c0e4347"} Mar 18 13:21:02.300688 master-0 kubenswrapper[7146]: I0318 13:21:02.300664 7146 scope.go:117] "RemoveContainer" containerID="c078d45f41d868996e6ecf51daad3770f6b4c7185d981080d710f8cb1c0e4347" Mar 18 13:21:02.306224 master-0 kubenswrapper[7146]: I0318 13:21:02.306174 7146 generic.go:334] "Generic (PLEG): container finished" podID="5bccf60c-5b07-4f40-8430-12bfb62661c7" containerID="a2ae2420b34ef246b54f0a6fe9ec2894bc3cd6d0edd11b8cc50a2c6c8fb9ff32" exitCode=0 Mar 18 13:21:02.306312 master-0 kubenswrapper[7146]: I0318 13:21:02.306254 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" event={"ID":"5bccf60c-5b07-4f40-8430-12bfb62661c7","Type":"ContainerDied","Data":"a2ae2420b34ef246b54f0a6fe9ec2894bc3cd6d0edd11b8cc50a2c6c8fb9ff32"} Mar 18 13:21:02.306705 master-0 kubenswrapper[7146]: I0318 13:21:02.306676 7146 scope.go:117] "RemoveContainer" 
containerID="a2ae2420b34ef246b54f0a6fe9ec2894bc3cd6d0edd11b8cc50a2c6c8fb9ff32" Mar 18 13:21:02.309087 master-0 kubenswrapper[7146]: I0318 13:21:02.309053 7146 generic.go:334] "Generic (PLEG): container finished" podID="3a039fc2-b0af-4b2c-a884-1c274c08064d" containerID="d7e8c2fdb968a1130191a8765d10f0d71f285ef10fc757a0ab5ebbff82c6fcc5" exitCode=0 Mar 18 13:21:02.309157 master-0 kubenswrapper[7146]: I0318 13:21:02.309137 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" event={"ID":"3a039fc2-b0af-4b2c-a884-1c274c08064d","Type":"ContainerDied","Data":"d7e8c2fdb968a1130191a8765d10f0d71f285ef10fc757a0ab5ebbff82c6fcc5"} Mar 18 13:21:02.309490 master-0 kubenswrapper[7146]: I0318 13:21:02.309467 7146 scope.go:117] "RemoveContainer" containerID="d7e8c2fdb968a1130191a8765d10f0d71f285ef10fc757a0ab5ebbff82c6fcc5" Mar 18 13:21:02.311456 master-0 kubenswrapper[7146]: I0318 13:21:02.311418 7146 generic.go:334] "Generic (PLEG): container finished" podID="17adbc1a-f29c-4278-b29a-0cc3879b753f" containerID="ea098486f4dc00d516848689091052951444062d9e2ae5ef81e67aadee11ef6e" exitCode=0 Mar 18 13:21:02.311509 master-0 kubenswrapper[7146]: I0318 13:21:02.311481 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" event={"ID":"17adbc1a-f29c-4278-b29a-0cc3879b753f","Type":"ContainerDied","Data":"ea098486f4dc00d516848689091052951444062d9e2ae5ef81e67aadee11ef6e"} Mar 18 13:21:02.311989 master-0 kubenswrapper[7146]: I0318 13:21:02.311946 7146 scope.go:117] "RemoveContainer" containerID="ea098486f4dc00d516848689091052951444062d9e2ae5ef81e67aadee11ef6e" Mar 18 13:21:02.313512 master-0 kubenswrapper[7146]: I0318 13:21:02.313480 7146 generic.go:334] "Generic (PLEG): container finished" podID="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" containerID="ca9e7669e9cbda3d1efa1643b57ac236e8b9cc289164b306448a040fc87f9948" exitCode=0 Mar 18 13:21:02.313558 master-0 
kubenswrapper[7146]: I0318 13:21:02.313539 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" event={"ID":"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4","Type":"ContainerDied","Data":"ca9e7669e9cbda3d1efa1643b57ac236e8b9cc289164b306448a040fc87f9948"} Mar 18 13:21:02.314479 master-0 kubenswrapper[7146]: I0318 13:21:02.314447 7146 scope.go:117] "RemoveContainer" containerID="ca9e7669e9cbda3d1efa1643b57ac236e8b9cc289164b306448a040fc87f9948" Mar 18 13:21:02.315233 master-0 kubenswrapper[7146]: I0318 13:21:02.315203 7146 generic.go:334] "Generic (PLEG): container finished" podID="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" containerID="50dc217c7e050a83d8f94c0b071aa6cc499aaacdf4273693193aaa83fb657bb6" exitCode=0 Mar 18 13:21:02.315285 master-0 kubenswrapper[7146]: I0318 13:21:02.315265 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" event={"ID":"c9a9baa5-9334-47dc-8d0c-eafc96a679b3","Type":"ContainerDied","Data":"50dc217c7e050a83d8f94c0b071aa6cc499aaacdf4273693193aaa83fb657bb6"} Mar 18 13:21:02.315580 master-0 kubenswrapper[7146]: I0318 13:21:02.315551 7146 scope.go:117] "RemoveContainer" containerID="50dc217c7e050a83d8f94c0b071aa6cc499aaacdf4273693193aaa83fb657bb6" Mar 18 13:21:02.319826 master-0 kubenswrapper[7146]: I0318 13:21:02.319789 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/0.log" Mar 18 13:21:02.321210 master-0 kubenswrapper[7146]: I0318 13:21:02.321169 7146 generic.go:334] "Generic (PLEG): container finished" podID="d2e2ef3a-a6e9-44dc-93c7-9f533e75502a" containerID="35a6e219e9c2c306481d98d16c4ce589a46a92dae3b8a5616cb81c85790b7339" exitCode=255 Mar 18 13:21:02.321271 master-0 kubenswrapper[7146]: I0318 13:21:02.321236 7146 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" event={"ID":"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a","Type":"ContainerDied","Data":"35a6e219e9c2c306481d98d16c4ce589a46a92dae3b8a5616cb81c85790b7339"} Mar 18 13:21:02.321818 master-0 kubenswrapper[7146]: I0318 13:21:02.321627 7146 scope.go:117] "RemoveContainer" containerID="35a6e219e9c2c306481d98d16c4ce589a46a92dae3b8a5616cb81c85790b7339" Mar 18 13:21:02.334261 master-0 kubenswrapper[7146]: I0318 13:21:02.332563 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:02.334261 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:02.334261 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:02.334261 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:02.334261 master-0 kubenswrapper[7146]: I0318 13:21:02.332610 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:02.336587 master-0 kubenswrapper[7146]: I0318 13:21:02.336554 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-kbpvr_36db10b8-33a2-4b54-85e2-9809eb6bc37d/package-server-manager/0.log" Mar 18 13:21:02.337334 master-0 kubenswrapper[7146]: I0318 13:21:02.337281 7146 generic.go:334] "Generic (PLEG): container finished" podID="36db10b8-33a2-4b54-85e2-9809eb6bc37d" containerID="763c041e89e36c29391b2cb35cd74d0ff6b0e6c63f07f02d238f792452bdf127" exitCode=1 Mar 18 13:21:02.337444 master-0 kubenswrapper[7146]: I0318 13:21:02.337372 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" event={"ID":"36db10b8-33a2-4b54-85e2-9809eb6bc37d","Type":"ContainerDied","Data":"763c041e89e36c29391b2cb35cd74d0ff6b0e6c63f07f02d238f792452bdf127"} Mar 18 13:21:02.337855 master-0 kubenswrapper[7146]: I0318 13:21:02.337806 7146 scope.go:117] "RemoveContainer" containerID="763c041e89e36c29391b2cb35cd74d0ff6b0e6c63f07f02d238f792452bdf127" Mar 18 13:21:02.339738 master-0 kubenswrapper[7146]: I0318 13:21:02.339028 7146 generic.go:334] "Generic (PLEG): container finished" podID="1ad580a2-7f58-4d66-adad-0a53d9777655" containerID="9d80034b295c4c336556d93672546628c76e7f2de665797ca7d2385c75fae222" exitCode=0 Mar 18 13:21:02.339738 master-0 kubenswrapper[7146]: I0318 13:21:02.339076 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" event={"ID":"1ad580a2-7f58-4d66-adad-0a53d9777655","Type":"ContainerDied","Data":"9d80034b295c4c336556d93672546628c76e7f2de665797ca7d2385c75fae222"} Mar 18 13:21:02.339738 master-0 kubenswrapper[7146]: I0318 13:21:02.339142 7146 scope.go:117] "RemoveContainer" containerID="efc42902b5c4767324208b71a30ab164f9e409ceb38a1c7d04d92fd8042f56d6" Mar 18 13:21:02.339738 master-0 kubenswrapper[7146]: I0318 13:21:02.339394 7146 scope.go:117] "RemoveContainer" containerID="9d80034b295c4c336556d93672546628c76e7f2de665797ca7d2385c75fae222" Mar 18 13:21:02.342584 master-0 kubenswrapper[7146]: I0318 13:21:02.342546 7146 generic.go:334] "Generic (PLEG): container finished" podID="1bf0ea4e-8b08-488f-b252-39580f46b756" containerID="cdeecfaffa91bced4d378bfbb335379410c275c90260acdb4404f15430b5fb3b" exitCode=0 Mar 18 13:21:02.342698 master-0 kubenswrapper[7146]: I0318 13:21:02.342609 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" 
event={"ID":"1bf0ea4e-8b08-488f-b252-39580f46b756","Type":"ContainerDied","Data":"cdeecfaffa91bced4d378bfbb335379410c275c90260acdb4404f15430b5fb3b"} Mar 18 13:21:02.343128 master-0 kubenswrapper[7146]: I0318 13:21:02.343102 7146 scope.go:117] "RemoveContainer" containerID="cdeecfaffa91bced4d378bfbb335379410c275c90260acdb4404f15430b5fb3b" Mar 18 13:21:02.428854 master-0 kubenswrapper[7146]: I0318 13:21:02.428818 7146 scope.go:117] "RemoveContainer" containerID="fd0bf4a4bcfb53e14fbaa9e4b5ac94436e182002bb238e07513655ae02a57f1d" Mar 18 13:21:03.118847 master-0 kubenswrapper[7146]: I0318 13:21:03.116840 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:21:03.329847 master-0 kubenswrapper[7146]: I0318 13:21:03.329082 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:03.329847 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:03.329847 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:03.329847 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:03.329847 master-0 kubenswrapper[7146]: I0318 13:21:03.329175 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:03.365792 master-0 kubenswrapper[7146]: I0318 13:21:03.365697 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/0.log" Mar 18 13:21:03.367609 master-0 kubenswrapper[7146]: I0318 13:21:03.367539 7146 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" event={"ID":"cb471665-2b07-48df-9881-3fb663390b23","Type":"ContainerStarted","Data":"387b25c5bcafd7f80a47dc9767b81d1036fef3db69192e819129d0ad10b5e7d2"} Mar 18 13:21:03.367898 master-0 kubenswrapper[7146]: I0318 13:21:03.367828 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" event={"ID":"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4","Type":"ContainerStarted","Data":"2428c16782c4b75813483a089b4de5b0d77508bb1ef7d27b1a484f112ac0527b"} Mar 18 13:21:03.368045 master-0 kubenswrapper[7146]: I0318 13:21:03.367898 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" event={"ID":"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a","Type":"ContainerStarted","Data":"ede624914ba0c4d8dde9c97860f999e51360db9fee68b790b81713e080030e57"} Mar 18 13:21:03.368045 master-0 kubenswrapper[7146]: I0318 13:21:03.368012 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" event={"ID":"3a039fc2-b0af-4b2c-a884-1c274c08064d","Type":"ContainerStarted","Data":"7bbfea136d5638e11df404b3c90a6e4bb7a4706a041d32618b2203b6b656edfd"} Mar 18 13:21:03.371707 master-0 kubenswrapper[7146]: I0318 13:21:03.371125 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerStarted","Data":"661c3bb10e2521fe20a10e5fb07df9df3af85336e6dda238f88d912cc35e4a9f"} Mar 18 13:21:03.371707 master-0 kubenswrapper[7146]: I0318 13:21:03.371280 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:21:03.374699 master-0 kubenswrapper[7146]: I0318 13:21:03.374627 7146 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" event={"ID":"5bccf60c-5b07-4f40-8430-12bfb62661c7","Type":"ContainerStarted","Data":"21fb1d734a1c571f9701759050968a25b488abb45837b2b4b91dee59a361481e"} Mar 18 13:21:03.378381 master-0 kubenswrapper[7146]: I0318 13:21:03.378327 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/0.log" Mar 18 13:21:03.379084 master-0 kubenswrapper[7146]: I0318 13:21:03.378999 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" event={"ID":"bd033b5b-af07-4e69-9a5c-46f7c9bde95a","Type":"ContainerStarted","Data":"b80e4b61b6717569b162bda872941db605fef62c509a7a1eb43964fd51004d63"} Mar 18 13:21:03.381802 master-0 kubenswrapper[7146]: I0318 13:21:03.381747 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" event={"ID":"1ad580a2-7f58-4d66-adad-0a53d9777655","Type":"ContainerStarted","Data":"9f3e7226df963755f99270da262394b26c264449e073863d948d0e88e4076502"} Mar 18 13:21:03.383826 master-0 kubenswrapper[7146]: I0318 13:21:03.383774 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" event={"ID":"65cfa12a-0711-4fba-8859-73a3f8f250a9","Type":"ContainerStarted","Data":"63d70024e5607dd0f325c1dca25a80e4589b1a262a3d3f4834d611ea24de9a2b"} Mar 18 13:21:03.384608 master-0 kubenswrapper[7146]: I0318 13:21:03.384270 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:21:03.388326 master-0 kubenswrapper[7146]: I0318 13:21:03.387473 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-p6tvz_369e9689-e2f6-4276-b096-8db094f8d6ae/cluster-node-tuning-operator/0.log" Mar 18 13:21:03.388326 master-0 kubenswrapper[7146]: I0318 13:21:03.387649 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" event={"ID":"369e9689-e2f6-4276-b096-8db094f8d6ae","Type":"ContainerStarted","Data":"a03b424d0ef07de061cca5fc1b0ebc8120bc600196be8bc03a50d77dcb94ab33"} Mar 18 13:21:03.390569 master-0 kubenswrapper[7146]: I0318 13:21:03.390507 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:21:03.391462 master-0 kubenswrapper[7146]: I0318 13:21:03.391425 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log" Mar 18 13:21:03.392785 master-0 kubenswrapper[7146]: I0318 13:21:03.392739 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/0.log" Mar 18 13:21:03.393343 master-0 kubenswrapper[7146]: I0318 13:21:03.393309 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:21:03.393407 master-0 kubenswrapper[7146]: I0318 13:21:03.393362 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5" exitCode=1 Mar 18 13:21:03.393447 master-0 kubenswrapper[7146]: I0318 13:21:03.393424 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerDied","Data":"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"} Mar 18 13:21:03.394086 master-0 kubenswrapper[7146]: I0318 13:21:03.394054 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" Mar 18 13:21:03.394086 master-0 kubenswrapper[7146]: I0318 13:21:03.394076 7146 scope.go:117] "RemoveContainer" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5" Mar 18 13:21:03.396631 master-0 kubenswrapper[7146]: I0318 13:21:03.396578 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" event={"ID":"16a930da-d793-486f-bcef-cf042d3c427d","Type":"ContainerStarted","Data":"3b29ec9dba6738278a322a421a3a2ce297ee8a823638cea54867dbec84f6bf17"} Mar 18 13:21:03.400726 master-0 kubenswrapper[7146]: I0318 13:21:03.400655 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" event={"ID":"1bf0ea4e-8b08-488f-b252-39580f46b756","Type":"ContainerStarted","Data":"62803bb394be0d1c3c0fed5e5f4d471cbb21f844411072960f01462dc7e904be"} Mar 18 13:21:03.404189 master-0 kubenswrapper[7146]: I0318 13:21:03.404143 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" event={"ID":"c9a9baa5-9334-47dc-8d0c-eafc96a679b3","Type":"ContainerStarted","Data":"e3465f02191bc8eaecee17b140699bcfc4b0458480265d45c9243adddb6a85d5"} Mar 18 13:21:03.405895 master-0 kubenswrapper[7146]: I0318 13:21:03.405820 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-kbpvr_36db10b8-33a2-4b54-85e2-9809eb6bc37d/package-server-manager/0.log" Mar 18 13:21:03.407319 master-0 
kubenswrapper[7146]: I0318 13:21:03.406756 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" event={"ID":"36db10b8-33a2-4b54-85e2-9809eb6bc37d","Type":"ContainerStarted","Data":"4eb8daaaa0817b42d78255024a347ded7ebfe6a2715bb11699d6a24317d81180"} Mar 18 13:21:03.407319 master-0 kubenswrapper[7146]: I0318 13:21:03.407122 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:21:03.410879 master-0 kubenswrapper[7146]: I0318 13:21:03.410844 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" event={"ID":"17adbc1a-f29c-4278-b29a-0cc3879b753f","Type":"ContainerStarted","Data":"440cf3503d24c446750ef9cb88e48aedc51484047c3b30c7de3626feb4d915e0"} Mar 18 13:21:03.637217 master-0 kubenswrapper[7146]: I0318 13:21:03.637153 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 18 13:21:03.637383 master-0 kubenswrapper[7146]: E0318 13:21:03.637375 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" Mar 18 13:21:03.637435 master-0 kubenswrapper[7146]: I0318 13:21:03.637387 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" Mar 18 13:21:03.637435 master-0 kubenswrapper[7146]: E0318 13:21:03.637409 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerName="installer" Mar 18 13:21:03.637435 master-0 kubenswrapper[7146]: I0318 13:21:03.637415 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerName="installer" Mar 18 
13:21:03.637435 master-0 kubenswrapper[7146]: E0318 13:21:03.637434 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerName="installer" Mar 18 13:21:03.637544 master-0 kubenswrapper[7146]: I0318 13:21:03.637441 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerName="installer" Mar 18 13:21:03.638472 master-0 kubenswrapper[7146]: I0318 13:21:03.638452 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerName="installer" Mar 18 13:21:03.638540 master-0 kubenswrapper[7146]: I0318 13:21:03.638476 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="956513bf-3b98-4b0d-aca7-ccc3fdf8ae12" containerName="kube-multus-additional-cni-plugins" Mar 18 13:21:03.638540 master-0 kubenswrapper[7146]: I0318 13:21:03.638489 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerName="installer" Mar 18 13:21:03.638929 master-0 kubenswrapper[7146]: I0318 13:21:03.638892 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.644265 master-0 kubenswrapper[7146]: I0318 13:21:03.644031 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 13:21:03.644265 master-0 kubenswrapper[7146]: I0318 13:21:03.644207 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-kfzqm" Mar 18 13:21:03.647584 master-0 kubenswrapper[7146]: I0318 13:21:03.647533 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 18 13:21:03.660968 master-0 kubenswrapper[7146]: E0318 13:21:03.660920 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(f88d0f62c0688ab1909dc97f30d381b9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" Mar 18 13:21:03.731299 master-0 kubenswrapper[7146]: I0318 13:21:03.731218 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.731507 master-0 kubenswrapper[7146]: I0318 13:21:03.731358 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " 
pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.731507 master-0 kubenswrapper[7146]: I0318 13:21:03.731424 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2669bc40-9271-4494-9e21-290cd4383b05-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.832738 master-0 kubenswrapper[7146]: I0318 13:21:03.832658 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.832738 master-0 kubenswrapper[7146]: I0318 13:21:03.832723 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.833041 master-0 kubenswrapper[7146]: I0318 13:21:03.832754 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2669bc40-9271-4494-9e21-290cd4383b05-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.833041 master-0 kubenswrapper[7146]: I0318 13:21:03.832808 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-kubelet-dir\") pod 
\"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.833041 master-0 kubenswrapper[7146]: I0318 13:21:03.832843 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.850859 master-0 kubenswrapper[7146]: I0318 13:21:03.850786 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2669bc40-9271-4494-9e21-290cd4383b05-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:03.977556 master-0 kubenswrapper[7146]: I0318 13:21:03.977463 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:04.050481 master-0 kubenswrapper[7146]: E0318 13:21:04.050415 7146 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 18 13:21:04.327857 master-0 kubenswrapper[7146]: I0318 13:21:04.327719 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:04.327857 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:04.327857 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:04.327857 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:04.327857 master-0 kubenswrapper[7146]: I0318 13:21:04.327813 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:04.393483 master-0 kubenswrapper[7146]: I0318 13:21:04.393362 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 18 13:21:04.419855 master-0 kubenswrapper[7146]: I0318 13:21:04.419804 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log" Mar 18 13:21:04.421127 master-0 kubenswrapper[7146]: I0318 13:21:04.421099 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/0.log" Mar 18 13:21:04.421605 master-0 kubenswrapper[7146]: I0318 13:21:04.421575 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:21:04.421655 master-0 kubenswrapper[7146]: I0318 13:21:04.421634 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"} Mar 18 13:21:04.422405 master-0 kubenswrapper[7146]: I0318 13:21:04.422248 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" Mar 18 13:21:04.422511 master-0 kubenswrapper[7146]: E0318 13:21:04.422480 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(f88d0f62c0688ab1909dc97f30d381b9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" Mar 18 13:21:04.424348 master-0 kubenswrapper[7146]: I0318 13:21:04.424294 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"2669bc40-9271-4494-9e21-290cd4383b05","Type":"ContainerStarted","Data":"a577631cf83d4d696a51ef5800c1380f23cc2dfd5a5c79567b96e2414f25b3b1"} Mar 18 13:21:05.327867 master-0 kubenswrapper[7146]: I0318 13:21:05.327787 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:05.327867 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:05.327867 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:05.327867 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:05.328297 master-0 kubenswrapper[7146]: I0318 13:21:05.327878 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:05.358484 master-0 kubenswrapper[7146]: I0318 13:21:05.358415 7146 scope.go:117] "RemoveContainer" containerID="ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823" Mar 18 13:21:05.358799 master-0 kubenswrapper[7146]: E0318 13:21:05.358763 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-wkw7f_openshift-cluster-storage-operator(1ad93612-ab12-4b30-984f-119e1b924a84)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" podUID="1ad93612-ab12-4b30-984f-119e1b924a84" Mar 18 13:21:05.432062 master-0 kubenswrapper[7146]: I0318 13:21:05.432007 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"2669bc40-9271-4494-9e21-290cd4383b05","Type":"ContainerStarted","Data":"da68cebc5e87d23d463a0c9379a0a5014fb73cbd24809cddd09f3686c920cb75"} Mar 18 13:21:05.456020 master-0 kubenswrapper[7146]: I0318 13:21:05.455913 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" 
podStartSLOduration=2.455887735 podStartE2EDuration="2.455887735s" podCreationTimestamp="2026-03-18 13:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:21:05.447325984 +0000 UTC m=+774.255543365" watchObservedRunningTime="2026-03-18 13:21:05.455887735 +0000 UTC m=+774.264105116" Mar 18 13:21:06.117331 master-0 kubenswrapper[7146]: I0318 13:21:06.117231 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:06.117331 master-0 kubenswrapper[7146]: I0318 13:21:06.117312 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:06.328121 master-0 kubenswrapper[7146]: I0318 13:21:06.328026 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:06.328121 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:06.328121 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:06.328121 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:06.328121 master-0 kubenswrapper[7146]: I0318 13:21:06.328104 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:07.182186 master-0 kubenswrapper[7146]: I0318 13:21:07.182108 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:07.182875 master-0 kubenswrapper[7146]: I0318 13:21:07.182180 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:07.328028 master-0 kubenswrapper[7146]: I0318 13:21:07.327984 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:07.328028 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:07.328028 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:07.328028 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:07.328363 master-0 kubenswrapper[7146]: I0318 13:21:07.328337 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:08.327986 master-0 kubenswrapper[7146]: I0318 13:21:08.327916 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:08.327986 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:08.327986 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:08.327986 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:08.328925 master-0 kubenswrapper[7146]: I0318 13:21:08.328009 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:09.117216 master-0 kubenswrapper[7146]: I0318 13:21:09.117127 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:09.117216 master-0 kubenswrapper[7146]: I0318 13:21:09.117198 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:09.328441 master-0 kubenswrapper[7146]: I0318 13:21:09.328388 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:09.328441 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:09.328441 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:09.328441 
master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:09.329247 master-0 kubenswrapper[7146]: I0318 13:21:09.328474 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:09.358280 master-0 kubenswrapper[7146]: I0318 13:21:09.358204 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:21:09.358783 master-0 kubenswrapper[7146]: E0318 13:21:09.358732 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:21:10.183051 master-0 kubenswrapper[7146]: I0318 13:21:10.182922 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:10.183295 master-0 kubenswrapper[7146]: I0318 13:21:10.183052 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:10.328643 master-0 kubenswrapper[7146]: I0318 13:21:10.328595 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:10.328643 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:10.328643 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:10.328643 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:10.329214 master-0 kubenswrapper[7146]: I0318 13:21:10.328660 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:11.327600 master-0 kubenswrapper[7146]: I0318 13:21:11.327530 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:11.327600 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:11.327600 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:11.327600 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:11.328061 master-0 kubenswrapper[7146]: I0318 13:21:11.327606 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:12.117275 master-0 kubenswrapper[7146]: I0318 13:21:12.117214 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 
10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:12.117739 master-0 kubenswrapper[7146]: I0318 13:21:12.117273 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:12.327775 master-0 kubenswrapper[7146]: I0318 13:21:12.327731 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:12.327775 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:12.327775 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:12.327775 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:12.328168 master-0 kubenswrapper[7146]: I0318 13:21:12.328144 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:13.182973 master-0 kubenswrapper[7146]: I0318 13:21:13.182901 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:13.183451 master-0 kubenswrapper[7146]: I0318 13:21:13.183030 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" 
podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:13.183451 master-0 kubenswrapper[7146]: I0318 13:21:13.183116 7146 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:21:13.183785 master-0 kubenswrapper[7146]: I0318 13:21:13.183748 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"661c3bb10e2521fe20a10e5fb07df9df3af85336e6dda238f88d912cc35e4a9f"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 13:21:13.183841 master-0 kubenswrapper[7146]: I0318 13:21:13.183791 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" containerID="cri-o://661c3bb10e2521fe20a10e5fb07df9df3af85336e6dda238f88d912cc35e4a9f" gracePeriod=30 Mar 18 13:21:13.183876 master-0 kubenswrapper[7146]: I0318 13:21:13.183836 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:13.183964 master-0 kubenswrapper[7146]: I0318 13:21:13.183916 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:13.328034 master-0 kubenswrapper[7146]: I0318 13:21:13.327955 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:13.328034 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:13.328034 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:13.328034 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:13.328034 master-0 kubenswrapper[7146]: I0318 13:21:13.328024 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:14.328699 master-0 kubenswrapper[7146]: I0318 13:21:14.328603 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:14.328699 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:14.328699 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:14.328699 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:14.329287 master-0 kubenswrapper[7146]: I0318 13:21:14.328700 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:14.490862 master-0 kubenswrapper[7146]: I0318 13:21:14.490774 7146 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-c7nh9_0213214b-693b-411b-8254-48d7826011eb/openshift-config-operator/1.log" Mar 18 13:21:14.491596 master-0 kubenswrapper[7146]: I0318 13:21:14.491544 7146 generic.go:334] "Generic (PLEG): container finished" podID="0213214b-693b-411b-8254-48d7826011eb" containerID="661c3bb10e2521fe20a10e5fb07df9df3af85336e6dda238f88d912cc35e4a9f" exitCode=255 Mar 18 13:21:14.491660 master-0 kubenswrapper[7146]: I0318 13:21:14.491604 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerDied","Data":"661c3bb10e2521fe20a10e5fb07df9df3af85336e6dda238f88d912cc35e4a9f"} Mar 18 13:21:14.491660 master-0 kubenswrapper[7146]: I0318 13:21:14.491646 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" event={"ID":"0213214b-693b-411b-8254-48d7826011eb","Type":"ContainerStarted","Data":"d709ac01fbab0b75202fc2a64c4d881fb4b80dbd9d9648bed585f8095f2c4608"} Mar 18 13:21:14.491727 master-0 kubenswrapper[7146]: I0318 13:21:14.491664 7146 scope.go:117] "RemoveContainer" containerID="c078d45f41d868996e6ecf51daad3770f6b4c7185d981080d710f8cb1c0e4347" Mar 18 13:21:14.492183 master-0 kubenswrapper[7146]: I0318 13:21:14.492112 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:21:15.329366 master-0 kubenswrapper[7146]: I0318 13:21:15.329287 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:15.329366 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 
13:21:15.329366 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:15.329366 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:15.330172 master-0 kubenswrapper[7146]: I0318 13:21:15.329373 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:15.498595 master-0 kubenswrapper[7146]: I0318 13:21:15.498557 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-c7nh9_0213214b-693b-411b-8254-48d7826011eb/openshift-config-operator/1.log" Mar 18 13:21:16.182666 master-0 kubenswrapper[7146]: I0318 13:21:16.182584 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:16.182666 master-0 kubenswrapper[7146]: I0318 13:21:16.182643 7146 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:16.327480 master-0 kubenswrapper[7146]: I0318 13:21:16.327426 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:16.327480 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:16.327480 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:16.327480 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:16.327761 master-0 kubenswrapper[7146]: I0318 13:21:16.327491 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:16.358263 master-0 kubenswrapper[7146]: I0318 13:21:16.358199 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b" Mar 18 13:21:17.327625 master-0 kubenswrapper[7146]: I0318 13:21:17.327554 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:17.327625 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:17.327625 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:17.327625 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:17.327625 master-0 kubenswrapper[7146]: I0318 13:21:17.327622 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:17.518090 master-0 kubenswrapper[7146]: I0318 13:21:17.518047 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log" Mar 18 13:21:17.522005 master-0 kubenswrapper[7146]: I0318 13:21:17.520726 7146 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/0.log" Mar 18 13:21:17.522005 master-0 kubenswrapper[7146]: I0318 13:21:17.521452 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log" Mar 18 13:21:17.522005 master-0 kubenswrapper[7146]: I0318 13:21:17.521501 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f88d0f62c0688ab1909dc97f30d381b9","Type":"ContainerStarted","Data":"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"} Mar 18 13:21:18.116929 master-0 kubenswrapper[7146]: I0318 13:21:18.116839 7146 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-c7nh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 13:21:18.117187 master-0 kubenswrapper[7146]: I0318 13:21:18.116928 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" podUID="0213214b-693b-411b-8254-48d7826011eb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 13:21:18.328637 master-0 kubenswrapper[7146]: I0318 13:21:18.328549 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:18.328637 master-0 kubenswrapper[7146]: [-]has-synced failed: reason 
withheld Mar 18 13:21:18.328637 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:18.328637 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:18.328902 master-0 kubenswrapper[7146]: I0318 13:21:18.328650 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:18.527836 master-0 kubenswrapper[7146]: I0318 13:21:18.527799 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/2.log" Mar 18 13:21:18.528332 master-0 kubenswrapper[7146]: I0318 13:21:18.528184 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/1.log" Mar 18 13:21:18.528591 master-0 kubenswrapper[7146]: I0318 13:21:18.528559 7146 generic.go:334] "Generic (PLEG): container finished" podID="a01c92f5-7938-437d-8262-11598bd8023c" containerID="3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e" exitCode=1 Mar 18 13:21:18.528637 master-0 kubenswrapper[7146]: I0318 13:21:18.528603 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerDied","Data":"3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e"} Mar 18 13:21:18.528682 master-0 kubenswrapper[7146]: I0318 13:21:18.528642 7146 scope.go:117] "RemoveContainer" containerID="d4c91e969faf5650da5d5727f2dfc66f398fbfef974094943a2e96586ef2e4ac" Mar 18 13:21:18.529172 master-0 kubenswrapper[7146]: I0318 13:21:18.529152 7146 scope.go:117] "RemoveContainer" 
containerID="3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e" Mar 18 13:21:18.529428 master-0 kubenswrapper[7146]: E0318 13:21:18.529392 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-7w5g8_openshift-machine-api(a01c92f5-7938-437d-8262-11598bd8023c)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" podUID="a01c92f5-7938-437d-8262-11598bd8023c" Mar 18 13:21:19.327740 master-0 kubenswrapper[7146]: I0318 13:21:19.327655 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:19.327740 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:19.327740 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:19.327740 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:19.328199 master-0 kubenswrapper[7146]: I0318 13:21:19.327754 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:19.536253 master-0 kubenswrapper[7146]: I0318 13:21:19.536203 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/2.log" Mar 18 13:21:20.327789 master-0 kubenswrapper[7146]: I0318 13:21:20.327711 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:20.327789 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:20.327789 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:20.327789 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:20.328155 master-0 kubenswrapper[7146]: I0318 13:21:20.327793 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:20.358075 master-0 kubenswrapper[7146]: I0318 13:21:20.358020 7146 scope.go:117] "RemoveContainer" containerID="ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823" Mar 18 13:21:20.545121 master-0 kubenswrapper[7146]: I0318 13:21:20.545066 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/3.log" Mar 18 13:21:20.545121 master-0 kubenswrapper[7146]: I0318 13:21:20.545126 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" event={"ID":"1ad93612-ab12-4b30-984f-119e1b924a84","Type":"ContainerStarted","Data":"daf4685a50571e06043111d527842084d63378bd98e5f04730643358d52aad25"} Mar 18 13:21:21.123298 master-0 kubenswrapper[7146]: I0318 13:21:21.123077 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:21:21.327475 master-0 kubenswrapper[7146]: I0318 13:21:21.327382 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:21.327475 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:21.327475 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:21.327475 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:21.327850 master-0 kubenswrapper[7146]: I0318 13:21:21.327506 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:21.557220 master-0 kubenswrapper[7146]: I0318 13:21:21.556010 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:21.557220 master-0 kubenswrapper[7146]: I0318 13:21:21.556778 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:21.566081 master-0 kubenswrapper[7146]: I0318 13:21:21.566006 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:21.567149 master-0 kubenswrapper[7146]: I0318 13:21:21.567093 7146 generic.go:334] "Generic (PLEG): container finished" podID="e4d0b174-33e4-46ee-863b-b5cc2a271b85" containerID="1b8157f4c23747a17d99cd1a75b5fd67d7d1923b9d3c78ebf701ed19d3b1c48e" exitCode=0 Mar 18 13:21:21.567231 master-0 kubenswrapper[7146]: I0318 13:21:21.567154 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" event={"ID":"e4d0b174-33e4-46ee-863b-b5cc2a271b85","Type":"ContainerDied","Data":"1b8157f4c23747a17d99cd1a75b5fd67d7d1923b9d3c78ebf701ed19d3b1c48e"} Mar 18 13:21:21.567729 master-0 kubenswrapper[7146]: I0318 13:21:21.567699 7146 scope.go:117] 
"RemoveContainer" containerID="1b8157f4c23747a17d99cd1a75b5fd67d7d1923b9d3c78ebf701ed19d3b1c48e" Mar 18 13:21:22.327599 master-0 kubenswrapper[7146]: I0318 13:21:22.327513 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:22.327599 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:22.327599 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:22.327599 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:22.327868 master-0 kubenswrapper[7146]: I0318 13:21:22.327601 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:22.577704 master-0 kubenswrapper[7146]: I0318 13:21:22.577586 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" event={"ID":"e4d0b174-33e4-46ee-863b-b5cc2a271b85","Type":"ContainerStarted","Data":"518d24be41bad81d65b7ad2b74d07264a532fc7aeb5fee7deec75f3fd2361f13"} Mar 18 13:21:23.328063 master-0 kubenswrapper[7146]: I0318 13:21:23.327911 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:23.328063 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:23.328063 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:23.328063 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:23.328063 master-0 kubenswrapper[7146]: I0318 13:21:23.328018 7146 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:23.358356 master-0 kubenswrapper[7146]: I0318 13:21:23.358308 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:21:23.358617 master-0 kubenswrapper[7146]: E0318 13:21:23.358585 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:21:24.327733 master-0 kubenswrapper[7146]: I0318 13:21:24.327686 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:24.327733 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:24.327733 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:24.327733 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:24.328301 master-0 kubenswrapper[7146]: I0318 13:21:24.327745 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:25.328453 master-0 kubenswrapper[7146]: I0318 13:21:25.328394 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:25.328453 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:25.328453 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:25.328453 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:25.329074 master-0 kubenswrapper[7146]: I0318 13:21:25.328468 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:26.327448 master-0 kubenswrapper[7146]: I0318 13:21:26.327362 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:26.327448 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:26.327448 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:26.327448 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:26.327448 master-0 kubenswrapper[7146]: I0318 13:21:26.327432 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:27.328240 master-0 kubenswrapper[7146]: I0318 13:21:27.328189 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:27.328240 master-0 kubenswrapper[7146]: 
[-]has-synced failed: reason withheld Mar 18 13:21:27.328240 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:27.328240 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:27.328891 master-0 kubenswrapper[7146]: I0318 13:21:27.328261 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:28.327595 master-0 kubenswrapper[7146]: I0318 13:21:28.327524 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:28.327595 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:28.327595 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:28.327595 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:28.327595 master-0 kubenswrapper[7146]: I0318 13:21:28.327594 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:29.199619 master-0 kubenswrapper[7146]: I0318 13:21:29.199553 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 13:21:29.200401 master-0 kubenswrapper[7146]: I0318 13:21:29.200374 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.202999 master-0 kubenswrapper[7146]: I0318 13:21:29.202883 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-62gbt" Mar 18 13:21:29.204325 master-0 kubenswrapper[7146]: I0318 13:21:29.204304 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:21:29.211872 master-0 kubenswrapper[7146]: I0318 13:21:29.211696 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 13:21:29.288928 master-0 kubenswrapper[7146]: I0318 13:21:29.288884 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.289159 master-0 kubenswrapper[7146]: I0318 13:21:29.288972 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-var-lock\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.289159 master-0 kubenswrapper[7146]: I0318 13:21:29.289068 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.327949 master-0 
kubenswrapper[7146]: I0318 13:21:29.327868 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:29.327949 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:29.327949 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:29.327949 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:29.328257 master-0 kubenswrapper[7146]: I0318 13:21:29.327950 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:29.390846 master-0 kubenswrapper[7146]: I0318 13:21:29.390735 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.390846 master-0 kubenswrapper[7146]: I0318 13:21:29.390826 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.391316 master-0 kubenswrapper[7146]: I0318 13:21:29.390878 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-var-lock\") pod \"installer-3-master-0\" (UID: 
\"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.391316 master-0 kubenswrapper[7146]: I0318 13:21:29.391053 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-var-lock\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.391316 master-0 kubenswrapper[7146]: I0318 13:21:29.391101 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.412334 master-0 kubenswrapper[7146]: I0318 13:21:29.412236 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:29.528895 master-0 kubenswrapper[7146]: I0318 13:21:29.528737 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:21:30.118226 master-0 kubenswrapper[7146]: I0318 13:21:30.117268 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 13:21:30.125362 master-0 kubenswrapper[7146]: W0318 13:21:30.125273 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89d262b4_b1a7_49b8_a8d2_1bb1ea671df8.slice/crio-161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38 WatchSource:0}: Error finding container 161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38: Status 404 returned error can't find the container with id 161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38 Mar 18 13:21:30.327866 master-0 kubenswrapper[7146]: I0318 13:21:30.327833 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:30.327866 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:30.327866 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:30.327866 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:30.328352 master-0 kubenswrapper[7146]: I0318 13:21:30.327890 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:30.377859 master-0 kubenswrapper[7146]: I0318 13:21:30.377749 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:21:30.626884 master-0 kubenswrapper[7146]: I0318 13:21:30.626823 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8","Type":"ContainerStarted","Data":"56a1ebe6b0097c7f125f082d76f61cc3fac21860bdfba2e3c6f543dc04756bf5"} Mar 18 13:21:30.627227 master-0 kubenswrapper[7146]: I0318 13:21:30.627198 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8","Type":"ContainerStarted","Data":"161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38"} Mar 18 13:21:30.731291 master-0 kubenswrapper[7146]: I0318 13:21:30.729036 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.72896558 podStartE2EDuration="728.96558ms" podCreationTimestamp="2026-03-18 13:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:21:30.702105454 +0000 UTC m=+799.510322845" watchObservedRunningTime="2026-03-18 13:21:30.72896558 +0000 UTC m=+799.537182951" Mar 18 13:21:30.731823 master-0 kubenswrapper[7146]: I0318 13:21:30.731753 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.731743858 podStartE2EDuration="1.731743858s" podCreationTimestamp="2026-03-18 13:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:21:30.723358292 +0000 UTC m=+799.531575653" watchObservedRunningTime="2026-03-18 13:21:30.731743858 +0000 UTC m=+799.539961229" Mar 18 13:21:31.328491 master-0 kubenswrapper[7146]: I0318 13:21:31.328427 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:31.328491 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:31.328491 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:31.328491 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:31.329247 master-0 kubenswrapper[7146]: I0318 13:21:31.328499 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:31.362224 master-0 kubenswrapper[7146]: I0318 13:21:31.362169 7146 scope.go:117] "RemoveContainer" containerID="3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e" Mar 18 13:21:31.362468 master-0 kubenswrapper[7146]: E0318 13:21:31.362442 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-7w5g8_openshift-machine-api(a01c92f5-7938-437d-8262-11598bd8023c)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" podUID="a01c92f5-7938-437d-8262-11598bd8023c" Mar 18 13:21:31.559397 master-0 kubenswrapper[7146]: I0318 13:21:31.559355 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:32.327655 master-0 kubenswrapper[7146]: I0318 13:21:32.327583 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:32.327655 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:32.327655 
master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:32.327655 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:32.328068 master-0 kubenswrapper[7146]: I0318 13:21:32.327663 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:33.328292 master-0 kubenswrapper[7146]: I0318 13:21:33.328159 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:33.328292 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:33.328292 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:33.328292 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:33.328292 master-0 kubenswrapper[7146]: I0318 13:21:33.328251 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:34.327976 master-0 kubenswrapper[7146]: I0318 13:21:34.327881 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:34.327976 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:34.327976 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:34.327976 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:34.327976 master-0 kubenswrapper[7146]: I0318 13:21:34.327969 7146 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:34.358726 master-0 kubenswrapper[7146]: I0318 13:21:34.358665 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:21:34.359142 master-0 kubenswrapper[7146]: E0318 13:21:34.358920 7146 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-xwqsb_openshift-ingress-operator(f2b92a53-0b61-4e1d-a306-f9a498e48b38)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" podUID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" Mar 18 13:21:34.462981 master-0 kubenswrapper[7146]: I0318 13:21:34.462747 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:21:35.328095 master-0 kubenswrapper[7146]: I0318 13:21:35.327974 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:35.328095 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:35.328095 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:35.328095 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:35.328395 master-0 kubenswrapper[7146]: I0318 13:21:35.328137 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:35.845754 master-0 kubenswrapper[7146]: I0318 13:21:35.845680 7146 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:21:35.846346 master-0 kubenswrapper[7146]: I0318 13:21:35.845922 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://311fa0a837fab2a478663d760de17d2a8ddc702068f88e4f3d424a59411456ff" gracePeriod=30 Mar 18 13:21:35.846795 master-0 kubenswrapper[7146]: I0318 13:21:35.846692 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 13:21:35.847079 master-0 kubenswrapper[7146]: E0318 13:21:35.847033 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847079 master-0 kubenswrapper[7146]: I0318 13:21:35.847059 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847079 master-0 kubenswrapper[7146]: E0318 13:21:35.847078 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847242 master-0 kubenswrapper[7146]: I0318 13:21:35.847086 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847287 master-0 kubenswrapper[7146]: I0318 13:21:35.847250 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847287 master-0 kubenswrapper[7146]: I0318 13:21:35.847273 7146 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847287 master-0 kubenswrapper[7146]: I0318 13:21:35.847284 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847464 master-0 kubenswrapper[7146]: E0318 13:21:35.847423 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.847464 master-0 kubenswrapper[7146]: I0318 13:21:35.847445 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:21:35.848751 master-0 kubenswrapper[7146]: I0318 13:21:35.848718 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:35.881877 master-0 kubenswrapper[7146]: I0318 13:21:35.881815 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 13:21:35.883577 master-0 kubenswrapper[7146]: I0318 13:21:35.883236 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:35.883577 master-0 kubenswrapper[7146]: I0318 13:21:35.883291 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:35.985410 master-0 
kubenswrapper[7146]: I0318 13:21:35.985328 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:35.985410 master-0 kubenswrapper[7146]: I0318 13:21:35.985425 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:35.985833 master-0 kubenswrapper[7146]: I0318 13:21:35.985522 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:35.985833 master-0 kubenswrapper[7146]: I0318 13:21:35.985632 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:36.007311 master-0 kubenswrapper[7146]: I0318 13:21:36.007230 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:21:36.045191 master-0 kubenswrapper[7146]: I0318 13:21:36.045116 7146 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="44d36162-9bea-4fa3-93df-647b4192794b" Mar 18 13:21:36.087057 master-0 kubenswrapper[7146]: I0318 13:21:36.086929 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 13:21:36.087057 master-0 kubenswrapper[7146]: I0318 13:21:36.087032 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:21:36.087542 master-0 kubenswrapper[7146]: I0318 13:21:36.087247 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 13:21:36.087542 master-0 kubenswrapper[7146]: I0318 13:21:36.087279 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:21:36.087609 master-0 kubenswrapper[7146]: I0318 13:21:36.087582 7146 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:21:36.087643 master-0 kubenswrapper[7146]: I0318 13:21:36.087604 7146 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 13:21:36.177629 master-0 kubenswrapper[7146]: I0318 13:21:36.177544 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:36.205568 master-0 kubenswrapper[7146]: W0318 13:21:36.205494 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e27b7d086edf5d2cf47b703574641d8.slice/crio-9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d WatchSource:0}: Error finding container 9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d: Status 404 returned error can't find the container with id 9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d Mar 18 13:21:36.328264 master-0 kubenswrapper[7146]: I0318 13:21:36.328214 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:36.328264 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:36.328264 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:36.328264 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:36.328615 master-0 kubenswrapper[7146]: I0318 13:21:36.328288 7146 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:36.667851 master-0 kubenswrapper[7146]: I0318 13:21:36.667674 7146 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="6d83f8447a991a30e2932285a9ad9391e4be4f81c9b4bec0c838fb37dccbbcda" exitCode=0 Mar 18 13:21:36.667851 master-0 kubenswrapper[7146]: I0318 13:21:36.667804 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"6d83f8447a991a30e2932285a9ad9391e4be4f81c9b4bec0c838fb37dccbbcda"} Mar 18 13:21:36.667851 master-0 kubenswrapper[7146]: I0318 13:21:36.667858 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d"} Mar 18 13:21:36.670199 master-0 kubenswrapper[7146]: I0318 13:21:36.669752 7146 generic.go:334] "Generic (PLEG): container finished" podID="2669bc40-9271-4494-9e21-290cd4383b05" containerID="da68cebc5e87d23d463a0c9379a0a5014fb73cbd24809cddd09f3686c920cb75" exitCode=0 Mar 18 13:21:36.670199 master-0 kubenswrapper[7146]: I0318 13:21:36.669879 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"2669bc40-9271-4494-9e21-290cd4383b05","Type":"ContainerDied","Data":"da68cebc5e87d23d463a0c9379a0a5014fb73cbd24809cddd09f3686c920cb75"} Mar 18 13:21:36.684995 master-0 kubenswrapper[7146]: I0318 13:21:36.684848 7146 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" 
containerID="311fa0a837fab2a478663d760de17d2a8ddc702068f88e4f3d424a59411456ff" exitCode=0 Mar 18 13:21:36.684995 master-0 kubenswrapper[7146]: I0318 13:21:36.684983 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e0ad9e4d46022da9225ef1364382c88cb4b32388cd7035e1c00337bf6332812" Mar 18 13:21:36.684995 master-0 kubenswrapper[7146]: I0318 13:21:36.684985 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:21:36.684995 master-0 kubenswrapper[7146]: I0318 13:21:36.685007 7146 scope.go:117] "RemoveContainer" containerID="fa1d385ac095a8d1dc31f1e6dbbfd78274773bc8abd30fc3ee99e963ef88d538" Mar 18 13:21:37.327666 master-0 kubenswrapper[7146]: I0318 13:21:37.327574 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:37.327666 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:37.327666 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:37.327666 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:37.328820 master-0 kubenswrapper[7146]: I0318 13:21:37.327668 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:37.367574 master-0 kubenswrapper[7146]: I0318 13:21:37.367507 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 18 13:21:37.367790 master-0 kubenswrapper[7146]: I0318 13:21:37.367774 7146 mirror_client.go:130] "Deleting a mirror pod" 
pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 18 13:21:37.383247 master-0 kubenswrapper[7146]: I0318 13:21:37.383191 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:21:37.383247 master-0 kubenswrapper[7146]: I0318 13:21:37.383237 7146 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="44d36162-9bea-4fa3-93df-647b4192794b" Mar 18 13:21:37.387639 master-0 kubenswrapper[7146]: I0318 13:21:37.387557 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:21:37.387639 master-0 kubenswrapper[7146]: I0318 13:21:37.387598 7146 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="44d36162-9bea-4fa3-93df-647b4192794b" Mar 18 13:21:37.697637 master-0 kubenswrapper[7146]: I0318 13:21:37.697548 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"b6b74e1434af586928d0d97de2097dc7d5af0debabf7fc72ae9441fd8215f19c"} Mar 18 13:21:37.697848 master-0 kubenswrapper[7146]: I0318 13:21:37.697744 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"476397230dd5265b109c88cb9895dcb2331c878aa0e952499f1e99bacdfb7c70"} Mar 18 13:21:37.697848 master-0 kubenswrapper[7146]: I0318 13:21:37.697781 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"f24e2a620b5b7fcf0061b5eed63562935874a9516125dafe2e71d357a479bb90"} Mar 18 13:21:37.698056 master-0 
kubenswrapper[7146]: I0318 13:21:37.698017 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:21:37.717019 master-0 kubenswrapper[7146]: I0318 13:21:37.716806 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.71677519 podStartE2EDuration="2.71677519s" podCreationTimestamp="2026-03-18 13:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:21:37.715377011 +0000 UTC m=+806.523594382" watchObservedRunningTime="2026-03-18 13:21:37.71677519 +0000 UTC m=+806.524992551" Mar 18 13:21:37.963282 master-0 kubenswrapper[7146]: I0318 13:21:37.963154 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:38.012049 master-0 kubenswrapper[7146]: I0318 13:21:38.011961 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-kubelet-dir\") pod \"2669bc40-9271-4494-9e21-290cd4383b05\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " Mar 18 13:21:38.012049 master-0 kubenswrapper[7146]: I0318 13:21:38.012031 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2669bc40-9271-4494-9e21-290cd4383b05-kube-api-access\") pod \"2669bc40-9271-4494-9e21-290cd4383b05\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " Mar 18 13:21:38.012335 master-0 kubenswrapper[7146]: I0318 13:21:38.012083 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-var-lock\") pod 
\"2669bc40-9271-4494-9e21-290cd4383b05\" (UID: \"2669bc40-9271-4494-9e21-290cd4383b05\") " Mar 18 13:21:38.012335 master-0 kubenswrapper[7146]: I0318 13:21:38.012110 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2669bc40-9271-4494-9e21-290cd4383b05" (UID: "2669bc40-9271-4494-9e21-290cd4383b05"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:21:38.012427 master-0 kubenswrapper[7146]: I0318 13:21:38.012310 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-var-lock" (OuterVolumeSpecName: "var-lock") pod "2669bc40-9271-4494-9e21-290cd4383b05" (UID: "2669bc40-9271-4494-9e21-290cd4383b05"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:21:38.012618 master-0 kubenswrapper[7146]: I0318 13:21:38.012586 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:21:38.012618 master-0 kubenswrapper[7146]: I0318 13:21:38.012612 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2669bc40-9271-4494-9e21-290cd4383b05-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:21:38.021045 master-0 kubenswrapper[7146]: I0318 13:21:38.019620 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2669bc40-9271-4494-9e21-290cd4383b05-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2669bc40-9271-4494-9e21-290cd4383b05" (UID: "2669bc40-9271-4494-9e21-290cd4383b05"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:21:38.113443 master-0 kubenswrapper[7146]: I0318 13:21:38.113358 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2669bc40-9271-4494-9e21-290cd4383b05-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:21:38.328124 master-0 kubenswrapper[7146]: I0318 13:21:38.327980 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:38.328124 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:38.328124 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:38.328124 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:38.328124 master-0 kubenswrapper[7146]: I0318 13:21:38.328057 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:38.705774 master-0 kubenswrapper[7146]: I0318 13:21:38.705709 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"2669bc40-9271-4494-9e21-290cd4383b05","Type":"ContainerDied","Data":"a577631cf83d4d696a51ef5800c1380f23cc2dfd5a5c79567b96e2414f25b3b1"} Mar 18 13:21:38.705774 master-0 kubenswrapper[7146]: I0318 13:21:38.705763 7146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a577631cf83d4d696a51ef5800c1380f23cc2dfd5a5c79567b96e2414f25b3b1" Mar 18 13:21:38.706057 master-0 kubenswrapper[7146]: I0318 13:21:38.705999 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 18 13:21:39.328772 master-0 kubenswrapper[7146]: I0318 13:21:39.328720 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:39.328772 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:39.328772 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:39.328772 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:39.329358 master-0 kubenswrapper[7146]: I0318 13:21:39.328792 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:40.327824 master-0 kubenswrapper[7146]: I0318 13:21:40.327676 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:40.327824 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:40.327824 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:40.327824 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:40.327824 master-0 kubenswrapper[7146]: I0318 13:21:40.327782 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:41.327880 master-0 kubenswrapper[7146]: I0318 13:21:41.327809 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:41.327880 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:41.327880 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:41.327880 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:41.328564 master-0 kubenswrapper[7146]: I0318 13:21:41.327930 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:42.328210 master-0 kubenswrapper[7146]: I0318 13:21:42.328158 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:42.328210 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:42.328210 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:42.328210 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:42.328804 master-0 kubenswrapper[7146]: I0318 13:21:42.328215 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:43.328273 master-0 kubenswrapper[7146]: I0318 13:21:43.328218 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:21:43.328273 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:43.328273 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:43.328273 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:43.329015 master-0 kubenswrapper[7146]: I0318 13:21:43.328284 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:44.328089 master-0 kubenswrapper[7146]: I0318 13:21:44.328019 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:44.328089 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:44.328089 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:44.328089 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:44.328794 master-0 kubenswrapper[7146]: I0318 13:21:44.328114 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:45.327684 master-0 kubenswrapper[7146]: I0318 13:21:45.327607 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:45.327684 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:45.327684 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:45.327684 master-0 kubenswrapper[7146]: healthz 
check failed Mar 18 13:21:45.328095 master-0 kubenswrapper[7146]: I0318 13:21:45.327707 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:46.327520 master-0 kubenswrapper[7146]: I0318 13:21:46.327420 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:46.327520 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:46.327520 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:46.327520 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:46.328232 master-0 kubenswrapper[7146]: I0318 13:21:46.327529 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:46.357830 master-0 kubenswrapper[7146]: I0318 13:21:46.357778 7146 scope.go:117] "RemoveContainer" containerID="3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e" Mar 18 13:21:46.756225 master-0 kubenswrapper[7146]: I0318 13:21:46.756176 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/2.log" Mar 18 13:21:46.756800 master-0 kubenswrapper[7146]: I0318 13:21:46.756756 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" 
event={"ID":"a01c92f5-7938-437d-8262-11598bd8023c","Type":"ContainerStarted","Data":"b07bc7a3b62abb518f7f1c159d82a93fb9f1cf24bf7c86108540d06340ff8092"} Mar 18 13:21:47.327869 master-0 kubenswrapper[7146]: I0318 13:21:47.327820 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:47.327869 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:47.327869 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:47.327869 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:47.328582 master-0 kubenswrapper[7146]: I0318 13:21:47.327893 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:47.358053 master-0 kubenswrapper[7146]: I0318 13:21:47.358005 7146 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:21:47.764654 master-0 kubenswrapper[7146]: I0318 13:21:47.764604 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/4.log" Mar 18 13:21:47.765040 master-0 kubenswrapper[7146]: I0318 13:21:47.764992 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"0f1b7521916bb1f15f4a8946c701639d4de35a4fc8e0cbdc319661e84db6acb6"} Mar 18 13:21:48.143531 master-0 kubenswrapper[7146]: I0318 13:21:48.143405 7146 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt"] Mar 18 13:21:48.143725 master-0 kubenswrapper[7146]: E0318 13:21:48.143712 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2669bc40-9271-4494-9e21-290cd4383b05" containerName="installer" Mar 18 13:21:48.143783 master-0 kubenswrapper[7146]: I0318 13:21:48.143727 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="2669bc40-9271-4494-9e21-290cd4383b05" containerName="installer" Mar 18 13:21:48.143910 master-0 kubenswrapper[7146]: I0318 13:21:48.143882 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="2669bc40-9271-4494-9e21-290cd4383b05" containerName="installer" Mar 18 13:21:48.144648 master-0 kubenswrapper[7146]: I0318 13:21:48.144620 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.147397 master-0 kubenswrapper[7146]: I0318 13:21:48.147366 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-29gmv" Mar 18 13:21:48.173928 master-0 kubenswrapper[7146]: I0318 13:21:48.173868 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt"] Mar 18 13:21:48.256454 master-0 kubenswrapper[7146]: I0318 13:21:48.256396 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.256669 master-0 kubenswrapper[7146]: I0318 13:21:48.256477 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twczm\" (UniqueName: 
\"kubernetes.io/projected/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-kube-api-access-twczm\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.327459 master-0 kubenswrapper[7146]: I0318 13:21:48.327394 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:48.327459 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:48.327459 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:48.327459 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:48.327812 master-0 kubenswrapper[7146]: I0318 13:21:48.327480 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:48.357632 master-0 kubenswrapper[7146]: I0318 13:21:48.357553 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twczm\" (UniqueName: \"kubernetes.io/projected/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-kube-api-access-twczm\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.358170 master-0 kubenswrapper[7146]: I0318 13:21:48.357715 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " 
pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.360747 master-0 kubenswrapper[7146]: I0318 13:21:48.360657 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.375310 master-0 kubenswrapper[7146]: I0318 13:21:48.375247 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twczm\" (UniqueName: \"kubernetes.io/projected/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-kube-api-access-twczm\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.466157 master-0 kubenswrapper[7146]: I0318 13:21:48.466093 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:21:48.942203 master-0 kubenswrapper[7146]: W0318 13:21:48.942135 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc9af4af_fb39_4a51_83ae_dab3f1d159f2.slice/crio-0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857 WatchSource:0}: Error finding container 0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857: Status 404 returned error can't find the container with id 0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857 Mar 18 13:21:48.942583 master-0 kubenswrapper[7146]: I0318 13:21:48.942550 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt"] Mar 18 13:21:49.328415 master-0 kubenswrapper[7146]: I0318 13:21:49.328350 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:49.328415 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:49.328415 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:49.328415 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:49.328749 master-0 kubenswrapper[7146]: I0318 13:21:49.328419 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:49.779451 master-0 kubenswrapper[7146]: I0318 13:21:49.779328 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" 
event={"ID":"bc9af4af-fb39-4a51-83ae-dab3f1d159f2","Type":"ContainerStarted","Data":"3bd34bf3cd8fe77afc558bfb332a7e01fbc49b45aa6b2f85efee0309ca5da2d2"} Mar 18 13:21:49.779451 master-0 kubenswrapper[7146]: I0318 13:21:49.779393 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" event={"ID":"bc9af4af-fb39-4a51-83ae-dab3f1d159f2","Type":"ContainerStarted","Data":"df934e19e232bc6fb6816ff4ba49b4beb8d1b8d44a4ad478603df18f92e7c121"} Mar 18 13:21:49.779451 master-0 kubenswrapper[7146]: I0318 13:21:49.779425 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" event={"ID":"bc9af4af-fb39-4a51-83ae-dab3f1d159f2","Type":"ContainerStarted","Data":"0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857"} Mar 18 13:21:49.807800 master-0 kubenswrapper[7146]: I0318 13:21:49.807683 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" podStartSLOduration=1.807659297 podStartE2EDuration="1.807659297s" podCreationTimestamp="2026-03-18 13:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:21:49.799890348 +0000 UTC m=+818.608107759" watchObservedRunningTime="2026-03-18 13:21:49.807659297 +0000 UTC m=+818.615876668" Mar 18 13:21:49.863028 master-0 kubenswrapper[7146]: I0318 13:21:49.862898 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"] Mar 18 13:21:49.863377 master-0 kubenswrapper[7146]: I0318 13:21:49.863298 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="multus-admission-controller" 
containerID="cri-o://edd40514a1f5f31c013c470064966c977a9ede25c673b02694bc6dccf5bde6b4" gracePeriod=30 Mar 18 13:21:49.863470 master-0 kubenswrapper[7146]: I0318 13:21:49.863408 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="kube-rbac-proxy" containerID="cri-o://cca16efb9d54bc951cd9ba818f02d1594b6f1d22556ab9b15b457bd617b1b96c" gracePeriod=30 Mar 18 13:21:50.328063 master-0 kubenswrapper[7146]: I0318 13:21:50.327927 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:50.328063 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:50.328063 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:50.328063 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:50.328361 master-0 kubenswrapper[7146]: I0318 13:21:50.328074 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:50.787219 master-0 kubenswrapper[7146]: I0318 13:21:50.787175 7146 generic.go:334] "Generic (PLEG): container finished" podID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerID="cca16efb9d54bc951cd9ba818f02d1594b6f1d22556ab9b15b457bd617b1b96c" exitCode=0 Mar 18 13:21:50.787726 master-0 kubenswrapper[7146]: I0318 13:21:50.787242 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" 
event={"ID":"906c0fd3-3bcd-4c6c-8505-b3517bae06b4","Type":"ContainerDied","Data":"cca16efb9d54bc951cd9ba818f02d1594b6f1d22556ab9b15b457bd617b1b96c"} Mar 18 13:21:51.328417 master-0 kubenswrapper[7146]: I0318 13:21:51.328364 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:51.328417 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:51.328417 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:51.328417 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:51.328756 master-0 kubenswrapper[7146]: I0318 13:21:51.328429 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:52.328033 master-0 kubenswrapper[7146]: I0318 13:21:52.327919 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:52.328033 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:52.328033 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:52.328033 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:52.328592 master-0 kubenswrapper[7146]: I0318 13:21:52.328041 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:53.327681 master-0 kubenswrapper[7146]: I0318 
13:21:53.327564 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:53.327681 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:53.327681 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:53.327681 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:53.327681 master-0 kubenswrapper[7146]: I0318 13:21:53.327642 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:54.327795 master-0 kubenswrapper[7146]: I0318 13:21:54.327678 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:54.327795 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:54.327795 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:54.327795 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:54.328758 master-0 kubenswrapper[7146]: I0318 13:21:54.327830 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:55.327519 master-0 kubenswrapper[7146]: I0318 13:21:55.327452 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:55.327519 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:55.327519 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:55.327519 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:55.327809 master-0 kubenswrapper[7146]: I0318 13:21:55.327528 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:56.328351 master-0 kubenswrapper[7146]: I0318 13:21:56.328256 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:56.328351 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:56.328351 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:56.328351 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:56.329265 master-0 kubenswrapper[7146]: I0318 13:21:56.328374 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:57.328290 master-0 kubenswrapper[7146]: I0318 13:21:57.328226 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:57.328290 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:57.328290 master-0 kubenswrapper[7146]: [+]process-running ok 
Mar 18 13:21:57.328290 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:57.328874 master-0 kubenswrapper[7146]: I0318 13:21:57.328313 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:58.333971 master-0 kubenswrapper[7146]: I0318 13:21:58.333142 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:58.333971 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:58.333971 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:58.333971 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:58.333971 master-0 kubenswrapper[7146]: I0318 13:21:58.333212 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:59.326960 master-0 kubenswrapper[7146]: I0318 13:21:59.326894 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:59.326960 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:21:59.326960 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:21:59.326960 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:21:59.327254 master-0 kubenswrapper[7146]: I0318 13:21:59.326978 7146 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:00.327914 master-0 kubenswrapper[7146]: I0318 13:22:00.327866 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:00.327914 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:00.327914 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:00.327914 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:00.328773 master-0 kubenswrapper[7146]: I0318 13:22:00.327924 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:01.327893 master-0 kubenswrapper[7146]: I0318 13:22:01.327854 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:01.327893 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:01.327893 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:01.327893 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:01.328557 master-0 kubenswrapper[7146]: I0318 13:22:01.328534 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:02.327913 
master-0 kubenswrapper[7146]: I0318 13:22:02.327866 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:02.327913 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:02.327913 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:02.327913 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:02.329012 master-0 kubenswrapper[7146]: I0318 13:22:02.327983 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:03.123912 master-0 kubenswrapper[7146]: I0318 13:22:03.123844 7146 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:03.124250 master-0 kubenswrapper[7146]: I0318 13:22:03.124203 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731" gracePeriod=30 Mar 18 13:22:03.124354 master-0 kubenswrapper[7146]: I0318 13:22:03.124262 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" containerID="cri-o://88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930" gracePeriod=30 Mar 18 13:22:03.124354 master-0 kubenswrapper[7146]: I0318 13:22:03.124289 7146 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253" gracePeriod=30 Mar 18 13:22:03.124501 master-0 kubenswrapper[7146]: I0318 13:22:03.124334 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager" containerID="cri-o://532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29" gracePeriod=30 Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: I0318 13:22:03.124902 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: E0318 13:22:03.125254 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: I0318 13:22:03.125276 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: E0318 13:22:03.125302 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: I0318 13:22:03.125312 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: E0318 13:22:03.125343 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" 
containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: I0318 13:22:03.125354 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: E0318 13:22:03.125373 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: I0318 13:22:03.125385 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: E0318 13:22:03.125399 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.125420 master-0 kubenswrapper[7146]: I0318 13:22:03.125410 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: E0318 13:22:03.125432 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125445 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: E0318 13:22:03.125468 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-recovery-controller" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125481 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" 
containerName="kube-controller-manager-recovery-controller" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: E0318 13:22:03.125512 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125524 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125723 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125747 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-recovery-controller" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125767 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125785 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125803 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125820 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125836 7146 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller"
Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125852 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager"
Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.125865 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller"
Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: E0318 13:22:03.126093 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller"
Mar 18 13:22:03.126108 master-0 kubenswrapper[7146]: I0318 13:22:03.126112 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller"
Mar 18 13:22:03.127243 master-0 kubenswrapper[7146]: E0318 13:22:03.126132 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer"
Mar 18 13:22:03.127243 master-0 kubenswrapper[7146]: I0318 13:22:03.126144 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="kube-controller-manager-cert-syncer"
Mar 18 13:22:03.127243 master-0 kubenswrapper[7146]: I0318 13:22:03.126352 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88d0f62c0688ab1909dc97f30d381b9" containerName="cluster-policy-controller"
Mar 18 13:22:03.182459 master-0 kubenswrapper[7146]: I0318 13:22:03.182405 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.182572 master-0 kubenswrapper[7146]: I0318 13:22:03.182531 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.283780 master-0 kubenswrapper[7146]: I0318 13:22:03.283733 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.283904 master-0 kubenswrapper[7146]: I0318 13:22:03.283782 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.283904 master-0 kubenswrapper[7146]: I0318 13:22:03.283877 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.284044 master-0 kubenswrapper[7146]: I0318 13:22:03.283922 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.328005 master-0 kubenswrapper[7146]: I0318 13:22:03.327965 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:03.328005 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:22:03.328005 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:22:03.328005 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:22:03.328694 master-0 kubenswrapper[7146]: I0318 13:22:03.328411 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:03.404789 master-0 kubenswrapper[7146]: I0318 13:22:03.404181 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/1.log"
Mar 18 13:22:03.405512 master-0 kubenswrapper[7146]: I0318 13:22:03.405492 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log"
Mar 18 13:22:03.407024 master-0 kubenswrapper[7146]: I0318 13:22:03.406991 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/0.log"
Mar 18 13:22:03.407478 master-0 kubenswrapper[7146]: I0318 13:22:03.407440 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log"
Mar 18 13:22:03.407570 master-0 kubenswrapper[7146]: I0318 13:22:03.407539 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.410679 master-0 kubenswrapper[7146]: I0318 13:22:03.410564 7146 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="f88d0f62c0688ab1909dc97f30d381b9" podUID="e47f97eb0a0cc5aac7e96e57325228c9"
Mar 18 13:22:03.486260 master-0 kubenswrapper[7146]: I0318 13:22:03.486193 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-resource-dir\") pod \"f88d0f62c0688ab1909dc97f30d381b9\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") "
Mar 18 13:22:03.486260 master-0 kubenswrapper[7146]: I0318 13:22:03.486269 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-cert-dir\") pod \"f88d0f62c0688ab1909dc97f30d381b9\" (UID: \"f88d0f62c0688ab1909dc97f30d381b9\") "
Mar 18 13:22:03.486486 master-0 kubenswrapper[7146]: I0318 13:22:03.486315 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f88d0f62c0688ab1909dc97f30d381b9" (UID: "f88d0f62c0688ab1909dc97f30d381b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:22:03.486486 master-0 kubenswrapper[7146]: I0318 13:22:03.486419 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f88d0f62c0688ab1909dc97f30d381b9" (UID: "f88d0f62c0688ab1909dc97f30d381b9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:22:03.486661 master-0 kubenswrapper[7146]: I0318 13:22:03.486630 7146 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:22:03.486661 master-0 kubenswrapper[7146]: I0318 13:22:03.486654 7146 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f88d0f62c0688ab1909dc97f30d381b9-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:22:03.875072 master-0 kubenswrapper[7146]: I0318 13:22:03.875015 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/1.log"
Mar 18 13:22:03.875645 master-0 kubenswrapper[7146]: I0318 13:22:03.875620 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/cluster-policy-controller/3.log"
Mar 18 13:22:03.876968 master-0 kubenswrapper[7146]: I0318 13:22:03.876921 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager-cert-syncer/0.log"
Mar 18 13:22:03.877581 master-0 kubenswrapper[7146]: I0318 13:22:03.877557 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f88d0f62c0688ab1909dc97f30d381b9/kube-controller-manager/0.log"
Mar 18 13:22:03.877684 master-0 kubenswrapper[7146]: I0318 13:22:03.877603 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930" exitCode=0
Mar 18 13:22:03.877684 master-0 kubenswrapper[7146]: I0318 13:22:03.877620 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253" exitCode=2
Mar 18 13:22:03.877684 master-0 kubenswrapper[7146]: I0318 13:22:03.877628 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29" exitCode=0
Mar 18 13:22:03.877684 master-0 kubenswrapper[7146]: I0318 13:22:03.877635 7146 generic.go:334] "Generic (PLEG): container finished" podID="f88d0f62c0688ab1909dc97f30d381b9" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731" exitCode=0
Mar 18 13:22:03.878032 master-0 kubenswrapper[7146]: I0318 13:22:03.877699 7146 scope.go:117] "RemoveContainer" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"
Mar 18 13:22:03.878032 master-0 kubenswrapper[7146]: I0318 13:22:03.877740 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:03.879215 master-0 kubenswrapper[7146]: I0318 13:22:03.879191 7146 generic.go:334] "Generic (PLEG): container finished" podID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerID="56a1ebe6b0097c7f125f082d76f61cc3fac21860bdfba2e3c6f543dc04756bf5" exitCode=0
Mar 18 13:22:03.879268 master-0 kubenswrapper[7146]: I0318 13:22:03.879224 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8","Type":"ContainerDied","Data":"56a1ebe6b0097c7f125f082d76f61cc3fac21860bdfba2e3c6f543dc04756bf5"}
Mar 18 13:22:03.894141 master-0 kubenswrapper[7146]: I0318 13:22:03.894083 7146 scope.go:117] "RemoveContainer" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"
Mar 18 13:22:03.900551 master-0 kubenswrapper[7146]: I0318 13:22:03.900482 7146 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="f88d0f62c0688ab1909dc97f30d381b9" podUID="e47f97eb0a0cc5aac7e96e57325228c9"
Mar 18 13:22:03.909950 master-0 kubenswrapper[7146]: I0318 13:22:03.909879 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"
Mar 18 13:22:03.932582 master-0 kubenswrapper[7146]: I0318 13:22:03.932547 7146 scope.go:117] "RemoveContainer" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"
Mar 18 13:22:03.947012 master-0 kubenswrapper[7146]: I0318 13:22:03.946976 7146 scope.go:117] "RemoveContainer" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"
Mar 18 13:22:03.969905 master-0 kubenswrapper[7146]: I0318 13:22:03.969851 7146 scope.go:117] "RemoveContainer" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"
Mar 18 13:22:03.989102 master-0 kubenswrapper[7146]: I0318 13:22:03.989056 7146 scope.go:117] "RemoveContainer" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"
Mar 18 13:22:04.000852 master-0 kubenswrapper[7146]: I0318 13:22:04.000798 7146 scope.go:117] "RemoveContainer" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"
Mar 18 13:22:04.001222 master-0 kubenswrapper[7146]: E0318 13:22:04.001175 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": container with ID starting with 88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930 not found: ID does not exist" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"
Mar 18 13:22:04.001288 master-0 kubenswrapper[7146]: I0318 13:22:04.001220 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"} err="failed to get container status \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": rpc error: code = NotFound desc = could not find container \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": container with ID starting with 88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930 not found: ID does not exist"
Mar 18 13:22:04.001288 master-0 kubenswrapper[7146]: I0318 13:22:04.001241 7146 scope.go:117] "RemoveContainer" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"
Mar 18 13:22:04.001589 master-0 kubenswrapper[7146]: E0318 13:22:04.001546 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": container with ID starting with 2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253 not found: ID does not exist" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"
Mar 18 13:22:04.001589 master-0 kubenswrapper[7146]: I0318 13:22:04.001577 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"} err="failed to get container status \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": rpc error: code = NotFound desc = could not find container \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": container with ID starting with 2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253 not found: ID does not exist"
Mar 18 13:22:04.001691 master-0 kubenswrapper[7146]: I0318 13:22:04.001595 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"
Mar 18 13:22:04.001915 master-0 kubenswrapper[7146]: E0318 13:22:04.001875 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": container with ID starting with 5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b not found: ID does not exist" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"
Mar 18 13:22:04.001997 master-0 kubenswrapper[7146]: I0318 13:22:04.001919 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} err="failed to get container status \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": rpc error: code = NotFound desc = could not find container \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": container with ID starting with 5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b not found: ID does not exist"
Mar 18 13:22:04.002046 master-0 kubenswrapper[7146]: I0318 13:22:04.001999 7146 scope.go:117] "RemoveContainer" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"
Mar 18 13:22:04.002317 master-0 kubenswrapper[7146]: E0318 13:22:04.002278 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": container with ID starting with 532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29 not found: ID does not exist" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"
Mar 18 13:22:04.002317 master-0 kubenswrapper[7146]: I0318 13:22:04.002310 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"} err="failed to get container status \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": rpc error: code = NotFound desc = could not find container \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": container with ID starting with 532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29 not found: ID does not exist"
Mar 18 13:22:04.002421 master-0 kubenswrapper[7146]: I0318 13:22:04.002325 7146 scope.go:117] "RemoveContainer" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"
Mar 18 13:22:04.002589 master-0 kubenswrapper[7146]: E0318 13:22:04.002554 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": container with ID starting with eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731 not found: ID does not exist" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"
Mar 18 13:22:04.002589 master-0 kubenswrapper[7146]: I0318 13:22:04.002581 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"} err="failed to get container status \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": rpc error: code = NotFound desc = could not find container \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": container with ID starting with eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731 not found: ID does not exist"
Mar 18 13:22:04.002695 master-0 kubenswrapper[7146]: I0318 13:22:04.002598 7146 scope.go:117] "RemoveContainer" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"
Mar 18 13:22:04.002909 master-0 kubenswrapper[7146]: E0318 13:22:04.002855 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": container with ID starting with 9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5 not found: ID does not exist" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"
Mar 18 13:22:04.002909 master-0 kubenswrapper[7146]: I0318 13:22:04.002885 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"} err="failed to get container status \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": rpc error: code = NotFound desc = could not find container \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": container with ID starting with 9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5 not found: ID does not exist"
Mar 18 13:22:04.002909 master-0 kubenswrapper[7146]: I0318 13:22:04.002905 7146 scope.go:117] "RemoveContainer" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"
Mar 18 13:22:04.003254 master-0 kubenswrapper[7146]: E0318 13:22:04.003191 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": container with ID starting with af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1 not found: ID does not exist" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"
Mar 18 13:22:04.003313 master-0 kubenswrapper[7146]: I0318 13:22:04.003248 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"} err="failed to get container status \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": rpc error: code = NotFound desc = could not find container \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": container with ID starting with af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1 not found: ID does not exist"
Mar 18 13:22:04.003313 master-0 kubenswrapper[7146]: I0318 13:22:04.003265 7146 scope.go:117] "RemoveContainer" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"
Mar 18 13:22:04.003628 master-0 kubenswrapper[7146]: I0318 13:22:04.003589 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"} err="failed to get container status \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": rpc error: code = NotFound desc = could not find container \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": container with ID starting with 88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930 not found: ID does not exist"
Mar 18 13:22:04.003628 master-0 kubenswrapper[7146]: I0318 13:22:04.003616 7146 scope.go:117] "RemoveContainer" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"
Mar 18 13:22:04.003994 master-0 kubenswrapper[7146]: I0318 13:22:04.003963 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"} err="failed to get container status \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": rpc error: code = NotFound desc = could not find container \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": container with ID starting with 2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253 not found: ID does not exist"
Mar 18 13:22:04.004056 master-0 kubenswrapper[7146]: I0318 13:22:04.003986 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"
Mar 18 13:22:04.004378 master-0 kubenswrapper[7146]: I0318 13:22:04.004339 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} err="failed to get container status \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": rpc error: code = NotFound desc = could not find container \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": container with ID starting with 5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b not found: ID does not exist"
Mar 18 13:22:04.004378 master-0 kubenswrapper[7146]: I0318 13:22:04.004363 7146 scope.go:117] "RemoveContainer" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"
Mar 18 13:22:04.004898 master-0 kubenswrapper[7146]: I0318 13:22:04.004844 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"} err="failed to get container status \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": rpc error: code = NotFound desc = could not find container \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": container with ID starting with 532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29 not found: ID does not exist"
Mar 18 13:22:04.004898 master-0 kubenswrapper[7146]: I0318 13:22:04.004886 7146 scope.go:117] "RemoveContainer" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"
Mar 18 13:22:04.005193 master-0 kubenswrapper[7146]: I0318 13:22:04.005155 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"} err="failed to get container status \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": rpc error: code = NotFound desc = could not find container \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": container with ID starting with eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731 not found: ID does not exist"
Mar 18 13:22:04.005193 master-0 kubenswrapper[7146]: I0318 13:22:04.005182 7146 scope.go:117] "RemoveContainer" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"
Mar 18 13:22:04.005498 master-0 kubenswrapper[7146]: I0318 13:22:04.005446 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"} err="failed to get container status \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": rpc error: code = NotFound desc = could not find container \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": container with ID starting with 9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5 not found: ID does not exist"
Mar 18 13:22:04.005498 master-0 kubenswrapper[7146]: I0318 13:22:04.005474 7146 scope.go:117] "RemoveContainer" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"
Mar 18 13:22:04.005878 master-0 kubenswrapper[7146]: I0318 13:22:04.005842 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"} err="failed to get container status \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": rpc error: code = NotFound desc = could not find container \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": container with ID starting with af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1 not found: ID does not exist"
Mar 18 13:22:04.005878 master-0 kubenswrapper[7146]: I0318 13:22:04.005865 7146 scope.go:117] "RemoveContainer" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"
Mar 18 13:22:04.006162 master-0 kubenswrapper[7146]: I0318 13:22:04.006125 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"} err="failed to get container status \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": rpc error: code = NotFound desc = could not find container \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": container with ID starting with 88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930 not found: ID does not exist"
Mar 18 13:22:04.006162 master-0 kubenswrapper[7146]: I0318 13:22:04.006152 7146 scope.go:117] "RemoveContainer" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"
Mar 18 13:22:04.007157 master-0 kubenswrapper[7146]: I0318 13:22:04.007120 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"} err="failed to get container status \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": rpc error: code = NotFound desc = could not find container \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": container with ID starting with 2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253 not found: ID does not exist"
Mar 18 13:22:04.007157 master-0 kubenswrapper[7146]: I0318 13:22:04.007144 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"
Mar 18 13:22:04.007529 master-0 kubenswrapper[7146]: I0318 13:22:04.007495 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} err="failed to get container status \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": rpc error: code = NotFound desc = could not find container \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": container with ID starting with 5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b not found: ID does not exist"
Mar 18 13:22:04.007529 master-0 kubenswrapper[7146]: I0318 13:22:04.007515 7146 scope.go:117] "RemoveContainer" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"
Mar 18 13:22:04.007816 master-0 kubenswrapper[7146]: I0318 13:22:04.007783 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"} err="failed to get container status \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": rpc error: code = NotFound desc = could not find container \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": container with ID starting with 532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29 not found: ID does not exist"
Mar 18 13:22:04.007816 master-0 kubenswrapper[7146]: I0318 13:22:04.007805 7146 scope.go:117] "RemoveContainer" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"
Mar 18 13:22:04.008103 master-0 kubenswrapper[7146]: I0318 13:22:04.008070 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"} err="failed to get container status \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": rpc error: code = NotFound desc = could not find container \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": container with ID starting with eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731 not found: ID does not exist"
Mar 18 13:22:04.008103 master-0 kubenswrapper[7146]: I0318 13:22:04.008093 7146 scope.go:117] "RemoveContainer" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"
Mar 18 13:22:04.008377 master-0 kubenswrapper[7146]: I0318 13:22:04.008341 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"} err="failed to get container status \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": rpc error: code = NotFound desc = could not find container \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": container with ID starting with 9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5 not found: ID does not exist"
Mar 18 13:22:04.008377 master-0 kubenswrapper[7146]: I0318 13:22:04.008364 7146 scope.go:117] "RemoveContainer" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"
Mar 18 13:22:04.008626 master-0 kubenswrapper[7146]: I0318 13:22:04.008592 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"} err="failed to get container status \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": rpc error: code = NotFound desc = could not find container \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": container with ID starting with af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1 not found: ID does not exist"
Mar 18 13:22:04.008626 master-0 kubenswrapper[7146]: I0318 13:22:04.008611 7146 scope.go:117] "RemoveContainer" containerID="88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"
Mar 18 13:22:04.009315 master-0 kubenswrapper[7146]: I0318 13:22:04.008978 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930"} err="failed to get container status \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": rpc error: code = NotFound desc = could not find container \"88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930\": container with ID starting with 88856140d260dc8e670ea300a8b04712e3a7b5a9416ac8abe6dc8d86ed7ac930 not found: ID does not exist"
Mar 18 13:22:04.009315 master-0 kubenswrapper[7146]: I0318 13:22:04.009003 7146 scope.go:117] "RemoveContainer" containerID="2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"
Mar 18 13:22:04.009483 master-0 kubenswrapper[7146]: I0318 13:22:04.009357 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253"} err="failed to get container status \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": rpc error: code = NotFound desc = could not find container \"2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253\": container with ID starting with 2e5a894242a2db15163049099cdf6fffdab420bc1976166d2c3db9ac86eda253 not found: ID does not exist"
Mar 18 13:22:04.009483 master-0 kubenswrapper[7146]: I0318 13:22:04.009378 7146 scope.go:117] "RemoveContainer" containerID="5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"
Mar 18 13:22:04.009752 master-0 kubenswrapper[7146]: I0318 13:22:04.009718 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b"} err="failed to get container status \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": rpc error: code = NotFound desc = could not find container \"5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b\": container with ID starting with 5804b22dfa1a05c130e25bb1654dd370741a13e4da8e62672e772b0a1f13152b not found: ID does not exist"
Mar 18 13:22:04.009817 master-0 kubenswrapper[7146]: I0318 13:22:04.009740 7146 scope.go:117] "RemoveContainer" containerID="532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"
Mar 18 13:22:04.010071 master-0 kubenswrapper[7146]: I0318 13:22:04.010028 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29"} err="failed to get container status \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": rpc error: code = NotFound desc = could not find container \"532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29\": container with ID starting with 532ab4e064e2dd45125f9e018e6d48bda6f6bfc6dd6ccd2d54a7e38276ef7e29 not found: ID does not exist"
Mar 18 13:22:04.010071 master-0 kubenswrapper[7146]: I0318 13:22:04.010052 7146 scope.go:117] "RemoveContainer" containerID="eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"
Mar 18 13:22:04.010484 master-0 kubenswrapper[7146]: I0318 13:22:04.010450 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731"} err="failed to get container status \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": rpc error: code = NotFound desc = could not find container \"eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731\": container with ID starting with eb2b8381ec9585aed54db87d1e7fda3cad64f87ceb827124c9f79afe1bfcb731 not found: ID does not exist"
Mar 18 13:22:04.010484 master-0 kubenswrapper[7146]: I0318 13:22:04.010477 7146 scope.go:117] "RemoveContainer" containerID="9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"
Mar 18 13:22:04.010743 master-0 kubenswrapper[7146]: I0318 13:22:04.010705 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5"} err="failed to get container status \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": rpc error: code = NotFound desc = could not find container \"9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5\": container with ID starting with 9d2f8fff2a542fc4a5048acb840a85d1e1fe1e2a59f0af576607d7f909d657d5 not found: ID does not exist"
Mar 18 13:22:04.010743 master-0 kubenswrapper[7146]: I0318 13:22:04.010728 7146 scope.go:117] "RemoveContainer" containerID="af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"
Mar 18 13:22:04.010991 master-0 kubenswrapper[7146]: I0318 13:22:04.010923 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1"} err="failed to get container status \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": rpc error: code = NotFound desc = could not find container \"af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1\": container with ID starting with af31458f7eb6997f13d0a185a22e7faa677f008bfd98f9398896c3dff57d1ac1 not found: ID does not exist"
Mar 18 13:22:04.327398 master-0 kubenswrapper[7146]: I0318 13:22:04.327333 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:04.327398 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld
Mar 18 13:22:04.327398 master-0 kubenswrapper[7146]: [+]process-running ok
Mar 18 13:22:04.327398 master-0 kubenswrapper[7146]: healthz check failed
Mar 18 13:22:04.327752 master-0 kubenswrapper[7146]: I0318 13:22:04.327402 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:05.181263 master-0 kubenswrapper[7146]: I0318 13:22:05.181200 7146 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:22:05.207364 master-0 kubenswrapper[7146]: I0318 13:22:05.207300 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kubelet-dir\") pod \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " Mar 18 13:22:05.207364 master-0 kubenswrapper[7146]: I0318 13:22:05.207367 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-var-lock\") pod \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " Mar 18 13:22:05.207615 master-0 kubenswrapper[7146]: I0318 13:22:05.207404 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kube-api-access\") pod \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\" (UID: \"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8\") " Mar 18 13:22:05.207615 master-0 kubenswrapper[7146]: I0318 13:22:05.207424 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-var-lock" (OuterVolumeSpecName: "var-lock") pod "89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" (UID: "89d262b4-b1a7-49b8-a8d2-1bb1ea671df8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:05.207615 master-0 kubenswrapper[7146]: I0318 13:22:05.207476 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" (UID: "89d262b4-b1a7-49b8-a8d2-1bb1ea671df8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:05.207615 master-0 kubenswrapper[7146]: I0318 13:22:05.207615 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:05.207778 master-0 kubenswrapper[7146]: I0318 13:22:05.207631 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:05.210113 master-0 kubenswrapper[7146]: I0318 13:22:05.210069 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" (UID: "89d262b4-b1a7-49b8-a8d2-1bb1ea671df8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:22:05.308923 master-0 kubenswrapper[7146]: I0318 13:22:05.308860 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89d262b4-b1a7-49b8-a8d2-1bb1ea671df8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:05.328019 master-0 kubenswrapper[7146]: I0318 13:22:05.327977 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:05.328019 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:05.328019 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:05.328019 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:05.328254 master-0 kubenswrapper[7146]: I0318 13:22:05.328037 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:05.366684 master-0 kubenswrapper[7146]: I0318 13:22:05.366621 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88d0f62c0688ab1909dc97f30d381b9" path="/var/lib/kubelet/pods/f88d0f62c0688ab1909dc97f30d381b9/volumes" Mar 18 13:22:05.891081 master-0 kubenswrapper[7146]: I0318 13:22:05.891046 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"89d262b4-b1a7-49b8-a8d2-1bb1ea671df8","Type":"ContainerDied","Data":"161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38"} Mar 18 13:22:05.891081 master-0 kubenswrapper[7146]: I0318 13:22:05.891081 7146 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38" Mar 18 13:22:05.891407 master-0 kubenswrapper[7146]: I0318 13:22:05.891115 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:22:06.327317 master-0 kubenswrapper[7146]: I0318 13:22:06.327262 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:06.327317 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:06.327317 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:06.327317 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:06.328007 master-0 kubenswrapper[7146]: I0318 13:22:06.327326 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:06.328007 master-0 kubenswrapper[7146]: I0318 13:22:06.327377 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:22:06.328007 master-0 kubenswrapper[7146]: I0318 13:22:06.327856 7146 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0aa6e3b114a8524f519800cee5439f1ad3e156a1def4a154cf20f82ebe9a3ef2"} pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" containerMessage="Container router failed startup probe, will be restarted" Mar 18 13:22:06.328007 master-0 kubenswrapper[7146]: I0318 13:22:06.327885 7146 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" containerID="cri-o://0aa6e3b114a8524f519800cee5439f1ad3e156a1def4a154cf20f82ebe9a3ef2" gracePeriod=3600 Mar 18 13:22:16.357769 master-0 kubenswrapper[7146]: I0318 13:22:16.357690 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:16.383881 master-0 kubenswrapper[7146]: I0318 13:22:16.383831 7146 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ff983604-e68a-4261-b111-d96cca987ed9" Mar 18 13:22:16.383881 master-0 kubenswrapper[7146]: I0318 13:22:16.383877 7146 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ff983604-e68a-4261-b111-d96cca987ed9" Mar 18 13:22:16.401284 master-0 kubenswrapper[7146]: I0318 13:22:16.401188 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:16.410108 master-0 kubenswrapper[7146]: I0318 13:22:16.407035 7146 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:16.420358 master-0 kubenswrapper[7146]: I0318 13:22:16.418524 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:16.423143 master-0 kubenswrapper[7146]: I0318 13:22:16.423104 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:16.427225 master-0 kubenswrapper[7146]: I0318 13:22:16.427186 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:16.449860 master-0 kubenswrapper[7146]: W0318 13:22:16.449811 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47f97eb0a0cc5aac7e96e57325228c9.slice/crio-37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72 WatchSource:0}: Error finding container 37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72: Status 404 returned error can't find the container with id 37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72 Mar 18 13:22:16.966923 master-0 kubenswrapper[7146]: I0318 13:22:16.966851 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"b9ab4da2bf00eddad01601b81bba9f16f6744134ee63b0910cd8e62f9b4a3e0d"} Mar 18 13:22:16.966923 master-0 kubenswrapper[7146]: I0318 13:22:16.966927 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"4f190a1e5cc84fa7af8fb29dad5d8ad4c967b2e4627e9634fba3c046d5f350df"} Mar 18 13:22:16.967139 master-0 kubenswrapper[7146]: I0318 13:22:16.966960 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72"} Mar 18 13:22:17.974697 master-0 kubenswrapper[7146]: I0318 13:22:17.974618 7146 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"87b86f2af8e501ae34658be585500655faa626562bf4927f068e08991f40d160"} Mar 18 13:22:17.974697 master-0 kubenswrapper[7146]: I0318 13:22:17.974663 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"e140efc28fb74fa94c1d843a6f6a44466dcb4914a6c8eada7179bb0663b14c56"} Mar 18 13:22:19.989456 master-0 kubenswrapper[7146]: I0318 13:22:19.989382 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-zvsmb_906c0fd3-3bcd-4c6c-8505-b3517bae06b4/multus-admission-controller/0.log" Mar 18 13:22:19.989456 master-0 kubenswrapper[7146]: I0318 13:22:19.989440 7146 generic.go:334] "Generic (PLEG): container finished" podID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerID="edd40514a1f5f31c013c470064966c977a9ede25c673b02694bc6dccf5bde6b4" exitCode=137 Mar 18 13:22:19.990108 master-0 kubenswrapper[7146]: I0318 13:22:19.989471 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" event={"ID":"906c0fd3-3bcd-4c6c-8505-b3517bae06b4","Type":"ContainerDied","Data":"edd40514a1f5f31c013c470064966c977a9ede25c673b02694bc6dccf5bde6b4"} Mar 18 13:22:20.190243 master-0 kubenswrapper[7146]: I0318 13:22:20.190200 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-zvsmb_906c0fd3-3bcd-4c6c-8505-b3517bae06b4/multus-admission-controller/0.log" Mar 18 13:22:20.190390 master-0 kubenswrapper[7146]: I0318 13:22:20.190269 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:22:20.208289 master-0 kubenswrapper[7146]: I0318 13:22:20.208175 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=4.2081573 podStartE2EDuration="4.2081573s" podCreationTimestamp="2026-03-18 13:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:18.003967187 +0000 UTC m=+846.812184588" watchObservedRunningTime="2026-03-18 13:22:20.2081573 +0000 UTC m=+849.016374661" Mar 18 13:22:20.315355 master-0 kubenswrapper[7146]: I0318 13:22:20.315290 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") pod \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " Mar 18 13:22:20.315563 master-0 kubenswrapper[7146]: I0318 13:22:20.315518 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") pod \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\" (UID: \"906c0fd3-3bcd-4c6c-8505-b3517bae06b4\") " Mar 18 13:22:20.318709 master-0 kubenswrapper[7146]: I0318 13:22:20.318649 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46" (OuterVolumeSpecName: "kube-api-access-rgh46") pod "906c0fd3-3bcd-4c6c-8505-b3517bae06b4" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4"). InnerVolumeSpecName "kube-api-access-rgh46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:22:20.318838 master-0 kubenswrapper[7146]: I0318 13:22:20.318733 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "906c0fd3-3bcd-4c6c-8505-b3517bae06b4" (UID: "906c0fd3-3bcd-4c6c-8505-b3517bae06b4"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:22:20.417346 master-0 kubenswrapper[7146]: I0318 13:22:20.417190 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgh46\" (UniqueName: \"kubernetes.io/projected/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-kube-api-access-rgh46\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:20.417346 master-0 kubenswrapper[7146]: I0318 13:22:20.417244 7146 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/906c0fd3-3bcd-4c6c-8505-b3517bae06b4-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:20.998169 master-0 kubenswrapper[7146]: I0318 13:22:20.997553 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-zvsmb_906c0fd3-3bcd-4c6c-8505-b3517bae06b4/multus-admission-controller/0.log" Mar 18 13:22:20.998725 master-0 kubenswrapper[7146]: I0318 13:22:20.998309 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" event={"ID":"906c0fd3-3bcd-4c6c-8505-b3517bae06b4","Type":"ContainerDied","Data":"70c9337d8980b38a9bfe7fac6f297ccd5982f9e26f0d9055d4cd37b7726d2727"} Mar 18 13:22:20.998725 master-0 kubenswrapper[7146]: I0318 13:22:20.998362 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb" Mar 18 13:22:20.998725 master-0 kubenswrapper[7146]: I0318 13:22:20.998410 7146 scope.go:117] "RemoveContainer" containerID="cca16efb9d54bc951cd9ba818f02d1594b6f1d22556ab9b15b457bd617b1b96c" Mar 18 13:22:21.013958 master-0 kubenswrapper[7146]: I0318 13:22:21.013903 7146 scope.go:117] "RemoveContainer" containerID="edd40514a1f5f31c013c470064966c977a9ede25c673b02694bc6dccf5bde6b4" Mar 18 13:22:21.034317 master-0 kubenswrapper[7146]: I0318 13:22:21.034224 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"] Mar 18 13:22:21.037768 master-0 kubenswrapper[7146]: I0318 13:22:21.037698 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zvsmb"] Mar 18 13:22:21.365917 master-0 kubenswrapper[7146]: I0318 13:22:21.365796 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" path="/var/lib/kubelet/pods/906c0fd3-3bcd-4c6c-8505-b3517bae06b4/volumes" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.963040 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: E0318 13:22:24.963383 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerName="installer" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.963404 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerName="installer" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: E0318 13:22:24.963445 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="kube-rbac-proxy" Mar 18 13:22:24.965971 master-0 
kubenswrapper[7146]: I0318 13:22:24.963456 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="kube-rbac-proxy" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: E0318 13:22:24.963473 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="multus-admission-controller" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.963482 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="multus-admission-controller" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.963646 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="kube-rbac-proxy" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.963668 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerName="installer" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.963679 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="906c0fd3-3bcd-4c6c-8505-b3517bae06b4" containerName="multus-admission-controller" Mar 18 13:22:24.965971 master-0 kubenswrapper[7146]: I0318 13:22:24.964211 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:24.967343 master-0 kubenswrapper[7146]: I0318 13:22:24.966008 7146 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-n2lc2" Mar 18 13:22:24.967343 master-0 kubenswrapper[7146]: I0318 13:22:24.966968 7146 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:22:24.988508 master-0 kubenswrapper[7146]: I0318 13:22:24.988445 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:25.094152 master-0 kubenswrapper[7146]: I0318 13:22:25.094080 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.094399 master-0 kubenswrapper[7146]: I0318 13:22:25.094168 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.094399 master-0 kubenswrapper[7146]: I0318 13:22:25.094209 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.195523 master-0 
kubenswrapper[7146]: I0318 13:22:25.195483 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.195801 master-0 kubenswrapper[7146]: I0318 13:22:25.195787 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.195902 master-0 kubenswrapper[7146]: I0318 13:22:25.195860 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.195902 master-0 kubenswrapper[7146]: I0318 13:22:25.195877 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.195995 master-0 kubenswrapper[7146]: I0318 13:22:25.195637 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" 
Mar 18 13:22:25.212723 master-0 kubenswrapper[7146]: I0318 13:22:25.212689 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.290154 master-0 kubenswrapper[7146]: I0318 13:22:25.290024 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:25.676567 master-0 kubenswrapper[7146]: I0318 13:22:25.676528 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:26.037904 master-0 kubenswrapper[7146]: I0318 13:22:26.037832 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"44dfc4d5-d5ec-428b-8a3e-64a2eb914951","Type":"ContainerStarted","Data":"07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28"} Mar 18 13:22:26.037904 master-0 kubenswrapper[7146]: I0318 13:22:26.037887 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"44dfc4d5-d5ec-428b-8a3e-64a2eb914951","Type":"ContainerStarted","Data":"aad525c76f44376de2a8991cff03337651841a12e026f90ff55d949af9034986"} Mar 18 13:22:26.183922 master-0 kubenswrapper[7146]: I0318 13:22:26.183848 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:22:26.215011 master-0 kubenswrapper[7146]: I0318 13:22:26.214813 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.214775861 podStartE2EDuration="2.214775861s" 
podCreationTimestamp="2026-03-18 13:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:26.067346263 +0000 UTC m=+854.875563704" watchObservedRunningTime="2026-03-18 13:22:26.214775861 +0000 UTC m=+855.022993262" Mar 18 13:22:26.423230 master-0 kubenswrapper[7146]: I0318 13:22:26.423174 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:26.423230 master-0 kubenswrapper[7146]: I0318 13:22:26.423231 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:26.423230 master-0 kubenswrapper[7146]: I0318 13:22:26.423243 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:26.423516 master-0 kubenswrapper[7146]: I0318 13:22:26.423255 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:26.428024 master-0 kubenswrapper[7146]: I0318 13:22:26.427974 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:26.428769 master-0 kubenswrapper[7146]: I0318 13:22:26.428740 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:27.048449 master-0 kubenswrapper[7146]: I0318 13:22:27.048393 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:27.048985 master-0 kubenswrapper[7146]: I0318 13:22:27.048969 7146 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:30.964460 master-0 kubenswrapper[7146]: I0318 13:22:30.964383 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:30.965406 master-0 kubenswrapper[7146]: I0318 13:22:30.964657 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="44dfc4d5-d5ec-428b-8a3e-64a2eb914951" containerName="installer" containerID="cri-o://07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28" gracePeriod=30 Mar 18 13:22:35.556277 master-0 kubenswrapper[7146]: I0318 13:22:35.555982 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 13:22:35.557225 master-0 kubenswrapper[7146]: I0318 13:22:35.557192 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.570317 master-0 kubenswrapper[7146]: I0318 13:22:35.570255 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 13:22:35.639396 master-0 kubenswrapper[7146]: I0318 13:22:35.639337 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-var-lock\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.639669 master-0 kubenswrapper[7146]: I0318 13:22:35.639412 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.639669 master-0 kubenswrapper[7146]: I0318 13:22:35.639574 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.741463 master-0 kubenswrapper[7146]: I0318 13:22:35.741403 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.741687 master-0 kubenswrapper[7146]: I0318 13:22:35.741484 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-var-lock\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.741687 master-0 kubenswrapper[7146]: I0318 13:22:35.741544 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-var-lock\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.741687 master-0 kubenswrapper[7146]: I0318 13:22:35.741581 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kubelet-dir\") pod \"installer-2-master-0\" (UID: 
\"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.741687 master-0 kubenswrapper[7146]: I0318 13:22:35.741654 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.763051 master-0 kubenswrapper[7146]: I0318 13:22:35.762998 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:35.873909 master-0 kubenswrapper[7146]: I0318 13:22:35.873795 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:36.277880 master-0 kubenswrapper[7146]: I0318 13:22:36.277815 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 13:22:37.134477 master-0 kubenswrapper[7146]: I0318 13:22:37.134014 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd","Type":"ContainerStarted","Data":"de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16"} Mar 18 13:22:37.134477 master-0 kubenswrapper[7146]: I0318 13:22:37.134074 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd","Type":"ContainerStarted","Data":"c7ba061487669e6787813d49fe7d9dd26c5ebebcf3ab048d8b5ccac6a43cd677"} Mar 18 13:22:47.574720 master-0 kubenswrapper[7146]: I0318 13:22:47.574646 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=12.57463095 podStartE2EDuration="12.57463095s" podCreationTimestamp="2026-03-18 13:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:37.149976169 +0000 UTC m=+865.958193550" watchObservedRunningTime="2026-03-18 13:22:47.57463095 +0000 UTC m=+876.382848311" Mar 18 13:22:47.578117 master-0 kubenswrapper[7146]: I0318 13:22:47.577961 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 13:22:47.578260 master-0 kubenswrapper[7146]: I0318 13:22:47.578147 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" containerName="installer" 
containerID="cri-o://de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16" gracePeriod=30 Mar 18 13:22:47.947872 master-0 kubenswrapper[7146]: I0318 13:22:47.947627 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_853d3e80-8ef1-47aa-86b4-82b2eb17f6dd/installer/0.log" Mar 18 13:22:47.947872 master-0 kubenswrapper[7146]: I0318 13:22:47.947690 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:48.014703 master-0 kubenswrapper[7146]: I0318 13:22:48.014634 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kubelet-dir\") pod \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " Mar 18 13:22:48.014919 master-0 kubenswrapper[7146]: I0318 13:22:48.014787 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" (UID: "853d3e80-8ef1-47aa-86b4-82b2eb17f6dd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:48.014919 master-0 kubenswrapper[7146]: I0318 13:22:48.014806 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kube-api-access\") pod \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " Mar 18 13:22:48.015015 master-0 kubenswrapper[7146]: I0318 13:22:48.014987 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-var-lock\") pod \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\" (UID: \"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd\") " Mar 18 13:22:48.015144 master-0 kubenswrapper[7146]: I0318 13:22:48.015079 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-var-lock" (OuterVolumeSpecName: "var-lock") pod "853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" (UID: "853d3e80-8ef1-47aa-86b4-82b2eb17f6dd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:48.015579 master-0 kubenswrapper[7146]: I0318 13:22:48.015538 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:48.015579 master-0 kubenswrapper[7146]: I0318 13:22:48.015572 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:48.017367 master-0 kubenswrapper[7146]: I0318 13:22:48.017334 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" (UID: "853d3e80-8ef1-47aa-86b4-82b2eb17f6dd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:22:48.116911 master-0 kubenswrapper[7146]: I0318 13:22:48.116833 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:48.209228 master-0 kubenswrapper[7146]: I0318 13:22:48.208985 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_853d3e80-8ef1-47aa-86b4-82b2eb17f6dd/installer/0.log" Mar 18 13:22:48.209228 master-0 kubenswrapper[7146]: I0318 13:22:48.209039 7146 generic.go:334] "Generic (PLEG): container finished" podID="853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" containerID="de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16" exitCode=1 Mar 18 13:22:48.209228 master-0 kubenswrapper[7146]: I0318 13:22:48.209069 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd","Type":"ContainerDied","Data":"de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16"} Mar 18 13:22:48.209228 master-0 kubenswrapper[7146]: I0318 13:22:48.209099 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"853d3e80-8ef1-47aa-86b4-82b2eb17f6dd","Type":"ContainerDied","Data":"c7ba061487669e6787813d49fe7d9dd26c5ebebcf3ab048d8b5ccac6a43cd677"} Mar 18 13:22:48.209228 master-0 kubenswrapper[7146]: I0318 13:22:48.209106 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 13:22:48.209228 master-0 kubenswrapper[7146]: I0318 13:22:48.209116 7146 scope.go:117] "RemoveContainer" containerID="de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16" Mar 18 13:22:48.226391 master-0 kubenswrapper[7146]: I0318 13:22:48.226351 7146 scope.go:117] "RemoveContainer" containerID="de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16" Mar 18 13:22:48.226979 master-0 kubenswrapper[7146]: E0318 13:22:48.226914 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16\": container with ID starting with de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16 not found: ID does not exist" containerID="de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16" Mar 18 13:22:48.227043 master-0 kubenswrapper[7146]: I0318 13:22:48.226985 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16"} err="failed to get container status \"de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16\": rpc error: code = NotFound desc = could not find container \"de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16\": container with ID starting with de5b6ec661068267f89b6ec90d271bd5876f115181b95ff21a8427b142da4a16 not found: ID does not exist" Mar 18 13:22:48.272959 master-0 kubenswrapper[7146]: I0318 13:22:48.263546 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 13:22:48.272959 master-0 kubenswrapper[7146]: I0318 13:22:48.268208 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 13:22:49.370316 master-0 kubenswrapper[7146]: I0318 13:22:49.370266 7146 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" path="/var/lib/kubelet/pods/853d3e80-8ef1-47aa-86b4-82b2eb17f6dd/volumes" Mar 18 13:22:51.560108 master-0 kubenswrapper[7146]: I0318 13:22:51.560042 7146 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 13:22:51.560708 master-0 kubenswrapper[7146]: E0318 13:22:51.560345 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" containerName="installer" Mar 18 13:22:51.560708 master-0 kubenswrapper[7146]: I0318 13:22:51.560363 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" containerName="installer" Mar 18 13:22:51.560708 master-0 kubenswrapper[7146]: I0318 13:22:51.560513 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="853d3e80-8ef1-47aa-86b4-82b2eb17f6dd" containerName="installer" Mar 18 13:22:51.561218 master-0 kubenswrapper[7146]: I0318 13:22:51.561026 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.575778 master-0 kubenswrapper[7146]: I0318 13:22:51.575726 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 13:22:51.663850 master-0 kubenswrapper[7146]: I0318 13:22:51.663670 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.664104 master-0 kubenswrapper[7146]: I0318 13:22:51.663869 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.664104 master-0 kubenswrapper[7146]: I0318 13:22:51.663993 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.765227 master-0 kubenswrapper[7146]: I0318 13:22:51.765173 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.765227 master-0 kubenswrapper[7146]: I0318 13:22:51.765227 7146 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.765504 master-0 kubenswrapper[7146]: I0318 13:22:51.765464 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.765542 master-0 kubenswrapper[7146]: I0318 13:22:51.765522 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.765616 master-0 kubenswrapper[7146]: I0318 13:22:51.765588 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.780412 master-0 kubenswrapper[7146]: I0318 13:22:51.780364 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:51.883152 master-0 kubenswrapper[7146]: I0318 13:22:51.883014 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:22:52.291733 master-0 kubenswrapper[7146]: I0318 13:22:52.291678 7146 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 13:22:52.298320 master-0 kubenswrapper[7146]: W0318 13:22:52.298259 7146 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod810ed1fb_bd32_4e5d_94e6_011f21ff37d3.slice/crio-6b30879eb2b02b10f5626375185ef5a50b2b5911613002b67fd621b1c5c99680 WatchSource:0}: Error finding container 6b30879eb2b02b10f5626375185ef5a50b2b5911613002b67fd621b1c5c99680: Status 404 returned error can't find the container with id 6b30879eb2b02b10f5626375185ef5a50b2b5911613002b67fd621b1c5c99680 Mar 18 13:22:53.251295 master-0 kubenswrapper[7146]: I0318 13:22:53.251212 7146 generic.go:334] "Generic (PLEG): container finished" podID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerID="0aa6e3b114a8524f519800cee5439f1ad3e156a1def4a154cf20f82ebe9a3ef2" exitCode=0 Mar 18 13:22:53.251295 master-0 kubenswrapper[7146]: I0318 13:22:53.251259 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerDied","Data":"0aa6e3b114a8524f519800cee5439f1ad3e156a1def4a154cf20f82ebe9a3ef2"} Mar 18 13:22:53.252518 master-0 kubenswrapper[7146]: I0318 13:22:53.251341 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" event={"ID":"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf","Type":"ContainerStarted","Data":"152c21402e0c84100b815841fd5ece1ff6269c57b77042b40be4bf62dd9300e0"} Mar 18 13:22:53.252518 master-0 kubenswrapper[7146]: I0318 13:22:53.251373 7146 scope.go:117] "RemoveContainer" containerID="f8a3caa2163025eca93eda965504b4cc6018d77ba7a2820b766d5ff6236b73e8" Mar 18 13:22:53.255441 master-0 kubenswrapper[7146]: I0318 13:22:53.255352 7146 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"810ed1fb-bd32-4e5d-94e6-011f21ff37d3","Type":"ContainerStarted","Data":"476726e8baea3eb0038921569d3e349c70ed11ed86a08818d39ebf2ee00767e9"} Mar 18 13:22:53.255623 master-0 kubenswrapper[7146]: I0318 13:22:53.255441 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"810ed1fb-bd32-4e5d-94e6-011f21ff37d3","Type":"ContainerStarted","Data":"6b30879eb2b02b10f5626375185ef5a50b2b5911613002b67fd621b1c5c99680"} Mar 18 13:22:53.326190 master-0 kubenswrapper[7146]: I0318 13:22:53.326013 7146 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:22:53.327921 master-0 kubenswrapper[7146]: I0318 13:22:53.327882 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:53.327921 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:53.327921 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:53.327921 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:53.328133 master-0 kubenswrapper[7146]: I0318 13:22:53.327986 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:53.436305 master-0 kubenswrapper[7146]: I0318 13:22:53.436232 7146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.4362112590000002 podStartE2EDuration="2.436211259s" podCreationTimestamp="2026-03-18 
13:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:53.43162121 +0000 UTC m=+882.239838591" watchObservedRunningTime="2026-03-18 13:22:53.436211259 +0000 UTC m=+882.244428620" Mar 18 13:22:54.326122 master-0 kubenswrapper[7146]: I0318 13:22:54.326056 7146 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:22:54.329133 master-0 kubenswrapper[7146]: I0318 13:22:54.329060 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:54.329133 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:54.329133 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:54.329133 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:54.329133 master-0 kubenswrapper[7146]: I0318 13:22:54.329120 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:55.328221 master-0 kubenswrapper[7146]: I0318 13:22:55.328164 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:55.328221 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:55.328221 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:55.328221 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:55.328802 master-0 kubenswrapper[7146]: I0318 
13:22:55.328247 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:56.327173 master-0 kubenswrapper[7146]: I0318 13:22:56.327117 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:56.327173 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:56.327173 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:56.327173 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:56.327461 master-0 kubenswrapper[7146]: I0318 13:22:56.327173 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:56.946522 master-0 kubenswrapper[7146]: I0318 13:22:56.946473 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_44dfc4d5-d5ec-428b-8a3e-64a2eb914951/installer/0.log" Mar 18 13:22:56.947138 master-0 kubenswrapper[7146]: I0318 13:22:56.946548 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:57.037021 master-0 kubenswrapper[7146]: I0318 13:22:57.036897 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kube-api-access\") pod \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " Mar 18 13:22:57.037021 master-0 kubenswrapper[7146]: I0318 13:22:57.036978 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kubelet-dir\") pod \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " Mar 18 13:22:57.037329 master-0 kubenswrapper[7146]: I0318 13:22:57.037044 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "44dfc4d5-d5ec-428b-8a3e-64a2eb914951" (UID: "44dfc4d5-d5ec-428b-8a3e-64a2eb914951"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:57.037329 master-0 kubenswrapper[7146]: I0318 13:22:57.037107 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-var-lock\") pod \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\" (UID: \"44dfc4d5-d5ec-428b-8a3e-64a2eb914951\") " Mar 18 13:22:57.037329 master-0 kubenswrapper[7146]: I0318 13:22:57.037206 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-var-lock" (OuterVolumeSpecName: "var-lock") pod "44dfc4d5-d5ec-428b-8a3e-64a2eb914951" (UID: "44dfc4d5-d5ec-428b-8a3e-64a2eb914951"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:57.037463 master-0 kubenswrapper[7146]: I0318 13:22:57.037341 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:57.037463 master-0 kubenswrapper[7146]: I0318 13:22:57.037355 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:57.039789 master-0 kubenswrapper[7146]: I0318 13:22:57.039650 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "44dfc4d5-d5ec-428b-8a3e-64a2eb914951" (UID: "44dfc4d5-d5ec-428b-8a3e-64a2eb914951"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:22:57.139162 master-0 kubenswrapper[7146]: I0318 13:22:57.139092 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44dfc4d5-d5ec-428b-8a3e-64a2eb914951-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:57.285077 master-0 kubenswrapper[7146]: I0318 13:22:57.284387 7146 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_44dfc4d5-d5ec-428b-8a3e-64a2eb914951/installer/0.log" Mar 18 13:22:57.285077 master-0 kubenswrapper[7146]: I0318 13:22:57.284434 7146 generic.go:334] "Generic (PLEG): container finished" podID="44dfc4d5-d5ec-428b-8a3e-64a2eb914951" containerID="07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28" exitCode=1 Mar 18 13:22:57.285077 master-0 kubenswrapper[7146]: I0318 13:22:57.284463 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"44dfc4d5-d5ec-428b-8a3e-64a2eb914951","Type":"ContainerDied","Data":"07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28"} Mar 18 13:22:57.285077 master-0 kubenswrapper[7146]: I0318 13:22:57.284489 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"44dfc4d5-d5ec-428b-8a3e-64a2eb914951","Type":"ContainerDied","Data":"aad525c76f44376de2a8991cff03337651841a12e026f90ff55d949af9034986"} Mar 18 13:22:57.285077 master-0 kubenswrapper[7146]: I0318 13:22:57.284507 7146 scope.go:117] "RemoveContainer" containerID="07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28" Mar 18 13:22:57.285077 master-0 kubenswrapper[7146]: I0318 13:22:57.284618 7146 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:57.303112 master-0 kubenswrapper[7146]: I0318 13:22:57.303072 7146 scope.go:117] "RemoveContainer" containerID="07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28" Mar 18 13:22:57.304019 master-0 kubenswrapper[7146]: E0318 13:22:57.303850 7146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28\": container with ID starting with 07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28 not found: ID does not exist" containerID="07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28" Mar 18 13:22:57.304019 master-0 kubenswrapper[7146]: I0318 13:22:57.303947 7146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28"} err="failed to get container status \"07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28\": rpc error: code = NotFound desc = could not find container \"07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28\": container with ID starting with 07f6882d5f86ec0a1d009fa1b8302a58daf4a1da8b04abbcb62a0525d3306a28 not found: ID does not exist" Mar 18 13:22:57.325683 master-0 kubenswrapper[7146]: I0318 13:22:57.325612 7146 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:57.328364 master-0 kubenswrapper[7146]: I0318 13:22:57.328319 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:57.328364 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:57.328364 master-0 
kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:57.328364 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:57.328605 master-0 kubenswrapper[7146]: I0318 13:22:57.328381 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:57.331397 master-0 kubenswrapper[7146]: I0318 13:22:57.331337 7146 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:57.365616 master-0 kubenswrapper[7146]: I0318 13:22:57.365562 7146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44dfc4d5-d5ec-428b-8a3e-64a2eb914951" path="/var/lib/kubelet/pods/44dfc4d5-d5ec-428b-8a3e-64a2eb914951/volumes" Mar 18 13:22:58.328045 master-0 kubenswrapper[7146]: I0318 13:22:58.327971 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:58.328045 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:58.328045 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:58.328045 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:58.328874 master-0 kubenswrapper[7146]: I0318 13:22:58.328089 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:59.327399 master-0 kubenswrapper[7146]: I0318 13:22:59.327329 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:59.327399 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:22:59.327399 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:22:59.327399 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:22:59.328000 master-0 kubenswrapper[7146]: I0318 13:22:59.327964 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:00.327965 master-0 kubenswrapper[7146]: I0318 13:23:00.327884 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:00.327965 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:00.327965 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:00.327965 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:00.328629 master-0 kubenswrapper[7146]: I0318 13:23:00.328001 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:01.326897 master-0 kubenswrapper[7146]: I0318 13:23:01.326846 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:01.326897 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 
13:23:01.326897 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:01.326897 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:01.327232 master-0 kubenswrapper[7146]: I0318 13:23:01.326920 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:02.327561 master-0 kubenswrapper[7146]: I0318 13:23:02.327494 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:02.327561 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:02.327561 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:02.327561 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:02.328210 master-0 kubenswrapper[7146]: I0318 13:23:02.327567 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:03.328724 master-0 kubenswrapper[7146]: I0318 13:23:03.328669 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:03.328724 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:03.328724 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:03.328724 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:03.329461 master-0 kubenswrapper[7146]: I0318 13:23:03.328738 
7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:04.328702 master-0 kubenswrapper[7146]: I0318 13:23:04.328640 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:04.328702 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:04.328702 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:04.328702 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:04.329537 master-0 kubenswrapper[7146]: I0318 13:23:04.328703 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:05.327053 master-0 kubenswrapper[7146]: I0318 13:23:05.326993 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:05.327053 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:05.327053 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:05.327053 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:05.327452 master-0 kubenswrapper[7146]: I0318 13:23:05.327054 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 13:23:06.327800 master-0 kubenswrapper[7146]: I0318 13:23:06.327745 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:06.327800 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:06.327800 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:06.327800 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:06.328872 master-0 kubenswrapper[7146]: I0318 13:23:06.328572 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:07.327995 master-0 kubenswrapper[7146]: I0318 13:23:07.327890 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:07.327995 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:07.327995 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:07.327995 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:07.328578 master-0 kubenswrapper[7146]: I0318 13:23:07.328044 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:08.327623 master-0 kubenswrapper[7146]: I0318 13:23:08.327570 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:08.327623 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:08.327623 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:08.327623 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:08.328021 master-0 kubenswrapper[7146]: I0318 13:23:08.327650 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:09.327898 master-0 kubenswrapper[7146]: I0318 13:23:09.327840 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:09.327898 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:09.327898 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:09.327898 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:09.328610 master-0 kubenswrapper[7146]: I0318 13:23:09.327918 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:10.327532 master-0 kubenswrapper[7146]: I0318 13:23:10.327403 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:10.327532 master-0 kubenswrapper[7146]: 
[-]has-synced failed: reason withheld Mar 18 13:23:10.327532 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:10.327532 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:10.327945 master-0 kubenswrapper[7146]: I0318 13:23:10.327890 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:11.327312 master-0 kubenswrapper[7146]: I0318 13:23:11.327265 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:11.327312 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:11.327312 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:11.327312 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:11.328065 master-0 kubenswrapper[7146]: I0318 13:23:11.327339 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:11.691377 master-0 kubenswrapper[7146]: I0318 13:23:11.691344 7146 scope.go:117] "RemoveContainer" containerID="311fa0a837fab2a478663d760de17d2a8ddc702068f88e4f3d424a59411456ff" Mar 18 13:23:12.328461 master-0 kubenswrapper[7146]: I0318 13:23:12.328334 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:12.328461 master-0 kubenswrapper[7146]: [-]has-synced failed: reason 
withheld Mar 18 13:23:12.328461 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:12.328461 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:12.329111 master-0 kubenswrapper[7146]: I0318 13:23:12.328471 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:13.327158 master-0 kubenswrapper[7146]: I0318 13:23:13.327101 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:13.327158 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:13.327158 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:13.327158 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:13.327483 master-0 kubenswrapper[7146]: I0318 13:23:13.327170 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:14.327421 master-0 kubenswrapper[7146]: I0318 13:23:14.327343 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:14.327421 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:14.327421 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:14.327421 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:14.328179 master-0 kubenswrapper[7146]: I0318 
13:23:14.327450 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:15.327482 master-0 kubenswrapper[7146]: I0318 13:23:15.327432 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:15.327482 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:15.327482 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:15.327482 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:15.328206 master-0 kubenswrapper[7146]: I0318 13:23:15.327522 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:16.328132 master-0 kubenswrapper[7146]: I0318 13:23:16.328085 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:16.328132 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:16.328132 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:16.328132 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:16.328871 master-0 kubenswrapper[7146]: I0318 13:23:16.328161 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:23:17.328115 master-0 kubenswrapper[7146]: I0318 13:23:17.328075 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:17.328115 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:17.328115 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:17.328115 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:17.328671 master-0 kubenswrapper[7146]: I0318 13:23:17.328134 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:18.328553 master-0 kubenswrapper[7146]: I0318 13:23:18.328480 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:18.328553 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:18.328553 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:18.328553 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:18.329253 master-0 kubenswrapper[7146]: I0318 13:23:18.328592 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:19.328190 master-0 kubenswrapper[7146]: I0318 13:23:19.328074 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:19.328190 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:19.328190 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:19.328190 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:19.328519 master-0 kubenswrapper[7146]: I0318 13:23:19.328232 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:20.327071 master-0 kubenswrapper[7146]: I0318 13:23:20.327012 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:20.327071 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:20.327071 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:20.327071 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:20.327743 master-0 kubenswrapper[7146]: I0318 13:23:20.327076 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:21.328031 master-0 kubenswrapper[7146]: I0318 13:23:21.327970 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:21.328031 master-0 kubenswrapper[7146]: 
[-]has-synced failed: reason withheld Mar 18 13:23:21.328031 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:21.328031 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:21.328586 master-0 kubenswrapper[7146]: I0318 13:23:21.328051 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:22.327838 master-0 kubenswrapper[7146]: I0318 13:23:22.327775 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:22.327838 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:22.327838 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:22.327838 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:22.328540 master-0 kubenswrapper[7146]: I0318 13:23:22.327861 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:23.328735 master-0 kubenswrapper[7146]: I0318 13:23:23.328668 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:23.328735 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:23.328735 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:23.328735 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:23.329449 master-0 
kubenswrapper[7146]: I0318 13:23:23.328762 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:24.327737 master-0 kubenswrapper[7146]: I0318 13:23:24.327687 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:24.327737 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:24.327737 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:24.327737 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:24.328089 master-0 kubenswrapper[7146]: I0318 13:23:24.327749 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:25.331244 master-0 kubenswrapper[7146]: I0318 13:23:25.331191 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:25.331244 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:25.331244 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:25.331244 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:25.331771 master-0 kubenswrapper[7146]: I0318 13:23:25.331292 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:26.327007 master-0 kubenswrapper[7146]: I0318 13:23:26.326855 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:26.327007 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:26.327007 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:26.327007 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:26.327007 master-0 kubenswrapper[7146]: I0318 13:23:26.326922 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:27.327841 master-0 kubenswrapper[7146]: I0318 13:23:27.327747 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:27.327841 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:27.327841 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:27.327841 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:27.328475 master-0 kubenswrapper[7146]: I0318 13:23:27.327872 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:28.328090 master-0 kubenswrapper[7146]: I0318 13:23:28.328011 7146 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:28.328090 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:28.328090 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:28.328090 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:28.328801 master-0 kubenswrapper[7146]: I0318 13:23:28.328097 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:29.328241 master-0 kubenswrapper[7146]: I0318 13:23:29.328178 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:29.328241 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:29.328241 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:29.328241 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:29.328845 master-0 kubenswrapper[7146]: I0318 13:23:29.328260 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:30.327642 master-0 kubenswrapper[7146]: I0318 13:23:30.327568 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:23:30.327642 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:30.327642 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:30.327642 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:30.327900 master-0 kubenswrapper[7146]: I0318 13:23:30.327638 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:31.327355 master-0 kubenswrapper[7146]: I0318 13:23:31.327276 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:31.327355 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:31.327355 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:31.327355 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:31.327355 master-0 kubenswrapper[7146]: I0318 13:23:31.327352 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:32.327990 master-0 kubenswrapper[7146]: I0318 13:23:32.327912 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:32.327990 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:32.327990 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:32.327990 master-0 kubenswrapper[7146]: healthz 
check failed Mar 18 13:23:32.328625 master-0 kubenswrapper[7146]: I0318 13:23:32.328024 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:33.328954 master-0 kubenswrapper[7146]: I0318 13:23:33.328875 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:33.328954 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:33.328954 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:33.328954 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:33.329507 master-0 kubenswrapper[7146]: I0318 13:23:33.328969 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:34.328079 master-0 kubenswrapper[7146]: I0318 13:23:34.327996 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:34.328079 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:34.328079 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:34.328079 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:34.328548 master-0 kubenswrapper[7146]: I0318 13:23:34.328078 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" 
podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:35.328030 master-0 kubenswrapper[7146]: I0318 13:23:35.327990 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:35.328030 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:35.328030 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:35.328030 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:35.328687 master-0 kubenswrapper[7146]: I0318 13:23:35.328654 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:36.328401 master-0 kubenswrapper[7146]: I0318 13:23:36.328315 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:36.328401 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:36.328401 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:36.328401 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:36.329161 master-0 kubenswrapper[7146]: I0318 13:23:36.328426 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:37.327478 master-0 kubenswrapper[7146]: I0318 13:23:37.327390 7146 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:37.327478 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:37.327478 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:37.327478 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:37.327835 master-0 kubenswrapper[7146]: I0318 13:23:37.327481 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:38.328050 master-0 kubenswrapper[7146]: I0318 13:23:38.327980 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:38.328050 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:38.328050 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:38.328050 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:38.328050 master-0 kubenswrapper[7146]: I0318 13:23:38.328050 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:39.327647 master-0 kubenswrapper[7146]: I0318 13:23:39.327514 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:39.327647 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:39.327647 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:39.327647 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:39.327647 master-0 kubenswrapper[7146]: I0318 13:23:39.327602 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:40.252922 master-0 kubenswrapper[7146]: I0318 13:23:40.252849 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:23:40.253486 master-0 kubenswrapper[7146]: E0318 13:23:40.253215 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dfc4d5-d5ec-428b-8a3e-64a2eb914951" containerName="installer" Mar 18 13:23:40.253486 master-0 kubenswrapper[7146]: I0318 13:23:40.253233 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dfc4d5-d5ec-428b-8a3e-64a2eb914951" containerName="installer" Mar 18 13:23:40.253486 master-0 kubenswrapper[7146]: I0318 13:23:40.253392 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="44dfc4d5-d5ec-428b-8a3e-64a2eb914951" containerName="installer" Mar 18 13:23:40.253928 master-0 kubenswrapper[7146]: I0318 13:23:40.253894 7146 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 13:23:40.254131 master-0 kubenswrapper[7146]: I0318 13:23:40.254078 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.254294 master-0 kubenswrapper[7146]: I0318 13:23:40.254230 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a907a02503b5df781613b6da0961b359781cced0221882a7b1a1568fee1b84fe" gracePeriod=15 Mar 18 13:23:40.254459 master-0 kubenswrapper[7146]: I0318 13:23:40.254414 7146 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://7ddc54cddedd2bdae32224357d62187da26cebbd3a01e7a295c7e87fef85c020" gracePeriod=15 Mar 18 13:23:40.255230 master-0 kubenswrapper[7146]: I0318 13:23:40.254876 7146 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:23:40.255457 master-0 kubenswrapper[7146]: E0318 13:23:40.255440 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 13:23:40.255523 master-0 kubenswrapper[7146]: I0318 13:23:40.255461 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 13:23:40.255523 master-0 kubenswrapper[7146]: E0318 13:23:40.255490 7146 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 13:23:40.255523 master-0 kubenswrapper[7146]: I0318 13:23:40.255499 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 13:23:40.255523 master-0 kubenswrapper[7146]: E0318 13:23:40.255519 7146 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 13:23:40.255637 master-0 kubenswrapper[7146]: I0318 13:23:40.255528 7146 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 13:23:40.255689 master-0 kubenswrapper[7146]: I0318 13:23:40.255674 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 13:23:40.255877 master-0 kubenswrapper[7146]: I0318 13:23:40.255691 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 13:23:40.255877 master-0 kubenswrapper[7146]: I0318 13:23:40.255718 7146 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 13:23:40.258177 master-0 kubenswrapper[7146]: I0318 13:23:40.257534 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.303315 master-0 kubenswrapper[7146]: E0318 13:23:40.303218 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.321986 master-0 kubenswrapper[7146]: E0318 13:23:40.321897 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.331967 master-0 kubenswrapper[7146]: I0318 13:23:40.330518 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:40.331967 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:40.331967 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:40.331967 master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:40.331967 master-0 kubenswrapper[7146]: I0318 13:23:40.330586 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:40.339978 master-0 kubenswrapper[7146]: I0318 13:23:40.339906 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.339978 master-0 kubenswrapper[7146]: I0318 13:23:40.339963 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.340207 master-0 kubenswrapper[7146]: I0318 13:23:40.339996 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.340207 master-0 kubenswrapper[7146]: I0318 13:23:40.340020 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.340207 master-0 kubenswrapper[7146]: I0318 13:23:40.340037 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.340207 master-0 kubenswrapper[7146]: I0318 13:23:40.340053 7146 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.340207 master-0 kubenswrapper[7146]: I0318 13:23:40.340093 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.340207 master-0 kubenswrapper[7146]: I0318 13:23:40.340116 7146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.365267 master-0 kubenswrapper[7146]: I0318 13:23:40.365184 7146 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body= Mar 18 13:23:40.365267 master-0 kubenswrapper[7146]: I0318 13:23:40.365239 7146 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:40.366173 master-0 kubenswrapper[7146]: 
E0318 13:23:40.365980 7146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Mar 18 13:23:40.366173 master-0 kubenswrapper[7146]: &Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df2483f2b8329 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:23:40.366173 master-0 kubenswrapper[7146]: body: Mar 18 13:23:40.366173 master-0 kubenswrapper[7146]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:23:40.365226793 +0000 UTC m=+929.173444164,LastTimestamp:2026-03-18 13:23:40.365226793 +0000 UTC m=+929.173444164,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 18 13:23:40.366173 master-0 kubenswrapper[7146]: > Mar 18 13:23:40.442397 master-0 kubenswrapper[7146]: I0318 13:23:40.442315 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.442397 master-0 kubenswrapper[7146]: I0318 13:23:40.442402 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442448 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442473 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442504 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442523 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442539 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442555 7146 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442639 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442674 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442696 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442720 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.442731 master-0 kubenswrapper[7146]: I0318 13:23:40.442743 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.443459 master-0 kubenswrapper[7146]: I0318 13:23:40.442769 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.443459 master-0 kubenswrapper[7146]: I0318 13:23:40.442790 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.443459 master-0 kubenswrapper[7146]: I0318 13:23:40.442811 7146 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.546694 master-0 kubenswrapper[7146]: I0318 13:23:40.546543 7146 generic.go:334] "Generic (PLEG): container 
finished" podID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" containerID="476726e8baea3eb0038921569d3e349c70ed11ed86a08818d39ebf2ee00767e9" exitCode=0 Mar 18 13:23:40.546694 master-0 kubenswrapper[7146]: I0318 13:23:40.546635 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"810ed1fb-bd32-4e5d-94e6-011f21ff37d3","Type":"ContainerDied","Data":"476726e8baea3eb0038921569d3e349c70ed11ed86a08818d39ebf2ee00767e9"} Mar 18 13:23:40.547896 master-0 kubenswrapper[7146]: I0318 13:23:40.547829 7146 status_manager.go:851] "Failed to get status for pod" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:40.548645 master-0 kubenswrapper[7146]: I0318 13:23:40.548607 7146 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="a907a02503b5df781613b6da0961b359781cced0221882a7b1a1568fee1b84fe" exitCode=0 Mar 18 13:23:40.604855 master-0 kubenswrapper[7146]: I0318 13:23:40.604802 7146 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:40.623131 master-0 kubenswrapper[7146]: I0318 13:23:40.623005 7146 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:40.896029 master-0 kubenswrapper[7146]: E0318 13:23:40.895786 7146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Mar 18 13:23:40.896029 master-0 kubenswrapper[7146]: &Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df2483f2b8329 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:23:40.896029 master-0 kubenswrapper[7146]: body: Mar 18 13:23:40.896029 master-0 kubenswrapper[7146]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:23:40.365226793 +0000 UTC m=+929.173444164,LastTimestamp:2026-03-18 13:23:40.365226793 +0000 UTC m=+929.173444164,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 18 13:23:40.896029 master-0 kubenswrapper[7146]: > Mar 18 13:23:41.328303 master-0 kubenswrapper[7146]: I0318 13:23:41.328185 7146 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:41.328303 master-0 kubenswrapper[7146]: [-]has-synced failed: reason withheld Mar 18 13:23:41.328303 master-0 kubenswrapper[7146]: [+]process-running ok Mar 18 13:23:41.328303 
master-0 kubenswrapper[7146]: healthz check failed Mar 18 13:23:41.328904 master-0 kubenswrapper[7146]: I0318 13:23:41.328320 7146 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:41.373986 master-0 kubenswrapper[7146]: I0318 13:23:41.371288 7146 status_manager.go:851] "Failed to get status for pod" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:41.555849 master-0 kubenswrapper[7146]: I0318 13:23:41.555744 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7"} Mar 18 13:23:41.555849 master-0 kubenswrapper[7146]: I0318 13:23:41.555810 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"49ae7d75896e0c278ca4b4cb9c4f8b076e025d8e605f566c7b21c0b8fb8bc3f7"} Mar 18 13:23:41.557333 master-0 kubenswrapper[7146]: I0318 13:23:41.557284 7146 status_manager.go:851] "Failed to get status for pod" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:41.558205 master-0 kubenswrapper[7146]: E0318 13:23:41.558068 7146 kubelet.go:1929] 
"Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:41.558205 master-0 kubenswrapper[7146]: I0318 13:23:41.558107 7146 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658" exitCode=0 Mar 18 13:23:41.558205 master-0 kubenswrapper[7146]: I0318 13:23:41.558159 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658"} Mar 18 13:23:41.558332 master-0 kubenswrapper[7146]: I0318 13:23:41.558225 7146 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9a15447edfc940cd5b4dde2df7e8e6360f5b93278864866c14e686e33bd8d32a"} Mar 18 13:23:41.559324 master-0 kubenswrapper[7146]: I0318 13:23:41.559282 7146 status_manager.go:851] "Failed to get status for pod" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:41.559324 master-0 kubenswrapper[7146]: E0318 13:23:41.559272 7146 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:41.821683 master-0 
kubenswrapper[7146]: I0318 13:23:41.821651 7146 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:41.822782 master-0 kubenswrapper[7146]: I0318 13:23:41.822749 7146 status_manager.go:851] "Failed to get status for pod" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:41.967613 master-0 kubenswrapper[7146]: I0318 13:23:41.966775 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " Mar 18 13:23:41.967613 master-0 kubenswrapper[7146]: I0318 13:23:41.966903 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " Mar 18 13:23:41.967613 master-0 kubenswrapper[7146]: I0318 13:23:41.966918 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock" (OuterVolumeSpecName: "var-lock") pod "810ed1fb-bd32-4e5d-94e6-011f21ff37d3" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:41.968521 master-0 kubenswrapper[7146]: I0318 13:23:41.968187 7146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " Mar 18 13:23:41.968521 master-0 kubenswrapper[7146]: I0318 13:23:41.968218 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "810ed1fb-bd32-4e5d-94e6-011f21ff37d3" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:41.968521 master-0 kubenswrapper[7146]: I0318 13:23:41.968481 7146 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:41.968521 master-0 kubenswrapper[7146]: I0318 13:23:41.968498 7146 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:41.969627 master-0 kubenswrapper[7146]: I0318 13:23:41.969599 7146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "810ed1fb-bd32-4e5d-94e6-011f21ff37d3" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:23:42.069202 master-0 kubenswrapper[7146]: I0318 13:23:42.068996 7146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:42.268882 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 13:23:42.354673 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 13:23:42.354995 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 13:23:42.372855 master-0 systemd[1]: kubelet.service: Consumed 1min 55.969s CPU time. Mar 18 13:23:42.439269 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 13:23:42.573925 master-0 kubenswrapper[28504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:42.573925 master-0 kubenswrapper[28504]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 13:23:42.573925 master-0 kubenswrapper[28504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:42.574242 master-0 kubenswrapper[28504]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:42.574242 master-0 kubenswrapper[28504]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 18 13:23:42.574242 master-0 kubenswrapper[28504]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:42.574242 master-0 kubenswrapper[28504]: I0318 13:23:42.574050 28504 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584252 28504 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584304 28504 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584311 28504 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584316 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584320 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584325 28504 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584330 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584334 28504 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584339 28504 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 
18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584343 28504 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584348 28504 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584352 28504 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584357 28504 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584381 28504 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584387 28504 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584392 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584403 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584409 28504 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584414 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:23:42.586433 master-0 kubenswrapper[28504]: W0318 13:23:42.584419 28504 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584425 28504 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584429 28504 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:23:42.587126 
master-0 kubenswrapper[28504]: W0318 13:23:42.584434 28504 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584439 28504 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584465 28504 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584470 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584475 28504 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584480 28504 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584484 28504 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584489 28504 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584493 28504 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584498 28504 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584502 28504 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584512 28504 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584519 28504 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584543 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584549 28504 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584555 28504 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584560 28504 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:23:42.587126 master-0 kubenswrapper[28504]: W0318 13:23:42.584564 28504 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584568 28504 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584575 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584581 28504 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584588 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584595 28504 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584600 28504 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584628 28504 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584635 28504 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584639 28504 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584645 28504 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584652 28504 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584657 28504 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584662 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584667 28504 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584673 28504 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584679 28504 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584705 28504 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584712 28504 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:23:42.588821 master-0 kubenswrapper[28504]: W0318 13:23:42.584716 28504 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584721 28504 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584725 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584733 28504 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584739 28504 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584744 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584750 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584755 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584760 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584784 28504 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584791 28504 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584798 28504 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584803 28504 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: W0318 13:23:42.584807 28504 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.584981 28504 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.584998 28504 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.585016 28504 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.585050 28504 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.585058 28504 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.585065 28504 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.585073 28504 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 13:23:42.589643 master-0 kubenswrapper[28504]: I0318 13:23:42.585079 28504 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585085 28504 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585091 28504 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585097 28504 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585127 28504 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585135 28504 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585140 28504 flags.go:64] FLAG: --cgroup-root="" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585146 28504 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585152 28504 flags.go:64] FLAG: --client-ca-file="" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585157 28504 flags.go:64] FLAG: --cloud-config="" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585163 28504 flags.go:64] FLAG: --cloud-provider="" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585168 28504 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585180 28504 flags.go:64] FLAG: --cluster-domain="" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585210 28504 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585217 28504 flags.go:64] FLAG: --config-dir="" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585222 28504 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585228 28504 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585237 28504 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585243 28504 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 
18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585250 28504 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585255 28504 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585263 28504 flags.go:64] FLAG: --contention-profiling="false" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585294 28504 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585299 28504 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585305 28504 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 13:23:42.590331 master-0 kubenswrapper[28504]: I0318 13:23:42.585310 28504 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585331 28504 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585337 28504 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585343 28504 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585374 28504 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585380 28504 flags.go:64] FLAG: --enable-server="true" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585386 28504 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585398 28504 flags.go:64] FLAG: --event-burst="100" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585404 28504 flags.go:64] FLAG: --event-qps="50" Mar 18 13:23:42.591118 
master-0 kubenswrapper[28504]: I0318 13:23:42.585410 28504 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585415 28504 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585420 28504 flags.go:64] FLAG: --eviction-hard="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585450 28504 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585458 28504 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585464 28504 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585472 28504 flags.go:64] FLAG: --eviction-soft="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585477 28504 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585482 28504 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585492 28504 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585497 28504 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585503 28504 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585529 28504 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585536 28504 flags.go:64] FLAG: --feature-gates="" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585544 28504 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 13:23:42.591118 master-0 
kubenswrapper[28504]: I0318 13:23:42.585550 28504 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 13:23:42.591118 master-0 kubenswrapper[28504]: I0318 13:23:42.585555 28504 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585561 28504 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585566 28504 flags.go:64] FLAG: --healthz-port="10248" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585571 28504 flags.go:64] FLAG: --help="false" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585577 28504 flags.go:64] FLAG: --hostname-override="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585582 28504 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585587 28504 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585615 28504 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585620 28504 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585626 28504 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585631 28504 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585636 28504 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585650 28504 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585656 28504 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 
13:23:42.585662 28504 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585687 28504 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585695 28504 flags.go:64] FLAG: --kube-reserved="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585700 28504 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585706 28504 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585712 28504 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585717 28504 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585722 28504 flags.go:64] FLAG: --lock-file="" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585728 28504 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585733 28504 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585739 28504 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585773 28504 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 13:23:42.591801 master-0 kubenswrapper[28504]: I0318 13:23:42.585780 28504 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585786 28504 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585791 28504 flags.go:64] FLAG: --logging-format="text" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585797 28504 
flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585803 28504 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585809 28504 flags.go:64] FLAG: --manifest-url="" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585814 28504 flags.go:64] FLAG: --manifest-url-header="" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585823 28504 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585849 28504 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585859 28504 flags.go:64] FLAG: --max-pods="110" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585865 28504 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585870 28504 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585876 28504 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585881 28504 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585886 28504 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585892 28504 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585897 28504 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585931 28504 flags.go:64] FLAG: 
--node-status-max-images="50" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585953 28504 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585959 28504 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585964 28504 flags.go:64] FLAG: --pod-cidr="" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.585970 28504 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.586012 28504 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 13:23:42.592429 master-0 kubenswrapper[28504]: I0318 13:23:42.586020 28504 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586026 28504 flags.go:64] FLAG: --pods-per-core="0" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586032 28504 flags.go:64] FLAG: --port="10250" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586037 28504 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586043 28504 flags.go:64] FLAG: --provider-id="" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586048 28504 flags.go:64] FLAG: --qos-reserved="" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586054 28504 flags.go:64] FLAG: --read-only-port="10255" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586060 28504 flags.go:64] FLAG: --register-node="true" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586090 28504 flags.go:64] FLAG: --register-schedulable="true" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586098 28504 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586111 28504 flags.go:64] FLAG: --registry-burst="10" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586117 28504 flags.go:64] FLAG: --registry-qps="5" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586123 28504 flags.go:64] FLAG: --reserved-cpus="" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586128 28504 flags.go:64] FLAG: --reserved-memory="" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586135 28504 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586140 28504 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586146 28504 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586172 28504 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586178 28504 flags.go:64] FLAG: --runonce="false" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586184 28504 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586190 28504 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586195 28504 flags.go:64] FLAG: --seccomp-default="false" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586201 28504 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586206 28504 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586212 28504 flags.go:64] FLAG: 
--storage-driver-db="cadvisor"
Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586217 28504 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 13:23:42.593021 master-0 kubenswrapper[28504]: I0318 13:23:42.586223 28504 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586249 28504 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586257 28504 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586262 28504 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586267 28504 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586273 28504 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586279 28504 flags.go:64] FLAG: --system-cgroups=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586284 28504 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586295 28504 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586307 28504 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586334 28504 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586344 28504 flags.go:64] FLAG: --tls-min-version=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586350 28504 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586354 28504 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586365 28504 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586371 28504 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586376 28504 flags.go:64] FLAG: --v="2"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586385 28504 flags.go:64] FLAG: --version="false"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586415 28504 flags.go:64] FLAG: --vmodule=""
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586422 28504 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: I0318 13:23:42.586429 28504 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: W0318 13:23:42.586809 28504 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: W0318 13:23:42.586822 28504 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:23:42.593662 master-0 kubenswrapper[28504]: W0318 13:23:42.586828 28504 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586852 28504 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586859 28504 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586864 28504 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586869 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586874 28504 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586888 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586892 28504 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586898 28504 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586904 28504 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586910 28504 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586949 28504 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586956 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586960 28504 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586965 28504 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586970 28504 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586975 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586979 28504 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586984 28504 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:23:42.594360 master-0 kubenswrapper[28504]: W0318 13:23:42.586988 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587013 28504 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587018 28504 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587031 28504 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587040 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587045 28504 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587051 28504 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587055 28504 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587060 28504 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587064 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587069 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587093 28504 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587099 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587104 28504 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587108 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587120 28504 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587125 28504 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587131 28504 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587136 28504 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587141 28504 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:23:42.594864 master-0 kubenswrapper[28504]: W0318 13:23:42.587145 28504 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587150 28504 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587176 28504 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587183 28504 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587189 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587193 28504 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587199 28504 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587204 28504 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587208 28504 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587213 28504 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587217 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587221 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587226 28504 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587230 28504 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587256 28504 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587261 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587268 28504 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587272 28504 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587277 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587289 28504 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:23:42.595374 master-0 kubenswrapper[28504]: W0318 13:23:42.587294 28504 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587298 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587303 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587308 28504 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587333 28504 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587339 28504 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587344 28504 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587348 28504 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587353 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587357 28504 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: W0318 13:23:42.587362 28504 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:23:42.595858 master-0 kubenswrapper[28504]: I0318 13:23:42.587379 28504 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:23:42.602649 master-0 kubenswrapper[28504]: I0318 13:23:42.602592 28504 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 13:23:42.602649 master-0 kubenswrapper[28504]: I0318 13:23:42.602639 28504 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602787 28504 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602796 28504 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602800 28504 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602804 28504 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602811 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602815 28504 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602819 28504 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602823 28504 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602827 28504 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602832 28504 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602841 28504 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602846 28504 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:23:42.602841 master-0 kubenswrapper[28504]: W0318 13:23:42.602851 28504 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602856 28504 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602860 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602865 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602871 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602875 28504 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602879 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602882 28504 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602886 28504 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602890 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602894 28504 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602897 28504 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602901 28504 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602905 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602910 28504 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602914 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602921 28504 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602924 28504 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602928 28504 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602932 28504 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:23:42.603190 master-0 kubenswrapper[28504]: W0318 13:23:42.602973 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.602977 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.602987 28504 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.602992 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.602996 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.602999 28504 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603003 28504 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603008 28504 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603013 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603020 28504 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603024 28504 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603028 28504 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603033 28504 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603037 28504 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603042 28504 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603046 28504 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603050 28504 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603054 28504 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603058 28504 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:23:42.603845 master-0 kubenswrapper[28504]: W0318 13:23:42.603061 28504 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603065 28504 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603069 28504 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603075 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603079 28504 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603083 28504 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603086 28504 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603090 28504 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603093 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603098 28504 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603103 28504 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603107 28504 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603111 28504 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603115 28504 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603119 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603125 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603129 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603133 28504 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603137 28504 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603146 28504 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:23:42.604538 master-0 kubenswrapper[28504]: W0318 13:23:42.603151 28504 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: I0318 13:23:42.603158 28504 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603419 28504 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603432 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603437 28504 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603440 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603445 28504 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603450 28504 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603455 28504 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603459 28504 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603466 28504 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603471 28504 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603475 28504 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603479 28504 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:23:42.605211 master-0 kubenswrapper[28504]: W0318 13:23:42.603482 28504 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603486 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603490 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603494 28504 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603498 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603501 28504 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603505 28504 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603510 28504 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603517 28504 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603522 28504 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603526 28504 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603530 28504 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603534 28504 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603538 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603542 28504 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603546 28504 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603550 28504 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603554 28504 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603558 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:23:42.605612 master-0 kubenswrapper[28504]: W0318 13:23:42.603562 28504 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603567 28504 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603578 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603582 28504 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603586 28504 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603590 28504 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603593 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603597 28504 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603601 28504 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603605 28504 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603609 28504 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603612 28504 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603616 28504 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603620 28504 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603627 28504 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603630 28504 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603634 28504 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603637 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603641 28504 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603645 28504 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:23:42.606182 master-0 kubenswrapper[28504]: W0318 13:23:42.603648 28504 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603652 28504 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603656 28504 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603659 28504 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603663 28504 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603667 28504 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603673 28504 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603676 28504 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603680 28504 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603683 28504 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603687 28504 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603691 28504 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603694 28504 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603698 28504 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603702 28504 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603707 28504 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603710 28504 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603714 28504 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603723 28504 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603729 28504 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:23:42.606719 master-0 kubenswrapper[28504]: W0318 13:23:42.603733 28504 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.603739 28504 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.603975 28504 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.605930 28504 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.606018 28504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.606232 28504 server.go:997] "Starting client certificate rotation" Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.606245 28504 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.606389 28504 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 07:18:58.642915439 +0000 UTC Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.606428 28504 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h55m16.036488993s for next certificate rotation Mar 18 13:23:42.607445 master-0 kubenswrapper[28504]: I0318 13:23:42.607282 28504 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 13:23:42.615304 master-0 kubenswrapper[28504]: I0318 13:23:42.615241 28504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 13:23:42.623630 master-0 kubenswrapper[28504]: I0318 13:23:42.623577 28504 log.go:25] "Validated CRI v1 runtime API" Mar 18 13:23:42.627862 master-0 kubenswrapper[28504]: I0318 13:23:42.627808 28504 log.go:25] "Validated CRI v1 image API" Mar 18 13:23:42.629975 master-0 kubenswrapper[28504]: I0318 13:23:42.629920 28504 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 13:23:42.638190 master-0 kubenswrapper[28504]: I0318 13:23:42.638122 28504 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 81ff0aa5-030f-4028-8e1c-14208afe7bfb:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 18 13:23:42.639022 master-0 kubenswrapper[28504]: I0318 13:23:42.638166 28504 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588/userdata/shm major:0 minor:730 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a/userdata/shm major:0 minor:934 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0dfd132ca6d17d71f64272cbf05802b2cf41d07648dbd09346eab0774ba709b2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0dfd132ca6d17d71f64272cbf05802b2cf41d07648dbd09346eab0774ba709b2/userdata/shm major:0 minor:775 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857/userdata/shm major:0 minor:1173 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790/userdata/shm major:0 minor:67 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5/userdata/shm major:0 minor:853 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e/userdata/shm major:0 minor:636 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1c625ab74e01dd5316e14886f1962977aaeec6d850dd1b7dad1e5cfa9c9c4cad/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c625ab74e01dd5316e14886f1962977aaeec6d850dd1b7dad1e5cfa9c9c4cad/userdata/shm major:0 minor:484 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1f5a6ee5a82f28ebea2649b710d2502f72b2b11fe536e2a60ed0b6577c615a5e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1f5a6ee5a82f28ebea2649b710d2502f72b2b11fe536e2a60ed0b6577c615a5e/userdata/shm major:0 minor:395 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63/userdata/shm major:0 minor:1049 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e/userdata/shm major:0 minor:1014 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2fb5e5e8607f93dafe9cc4e7936985507a00d052cc2ac3e0c096e4455936f109/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2fb5e5e8607f93dafe9cc4e7936985507a00d052cc2ac3e0c096e4455936f109/userdata/shm major:0 minor:786 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c/userdata/shm major:0 minor:390 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49ae7d75896e0c278ca4b4cb9c4f8b076e025d8e605f566c7b21c0b8fb8bc3f7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49ae7d75896e0c278ca4b4cb9c4f8b076e025d8e605f566c7b21c0b8fb8bc3f7/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23/userdata/shm major:0 minor:391 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6521ed821b17acabe4b6b4013792bafdd43c6335da5eba7b335ddb8b9407cf09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6521ed821b17acabe4b6b4013792bafdd43c6335da5eba7b335ddb8b9407cf09/userdata/shm major:0 minor:87 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b/userdata/shm major:0 minor:356 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6ae0c5f6306fcc2bc4d200c31e8ec02db83741ac24faf2d432c77d6884f24b98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6ae0c5f6306fcc2bc4d200c31e8ec02db83741ac24faf2d432c77d6884f24b98/userdata/shm major:0 minor:1051 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6c7a102b9c64081966ad588bf6d34058c0849b6b42caa6a8951b5cab3df0847b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6c7a102b9c64081966ad588bf6d34058c0849b6b42caa6a8951b5cab3df0847b/userdata/shm major:0 minor:843 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e/userdata/shm major:0 minor:489 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f72f96fe981864c7efed48f7ec73353e9a984bf6f9e3b23eec1a4ed414c6dbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f72f96fe981864c7efed48f7ec73353e9a984bf6f9e3b23eec1a4ed414c6dbd/userdata/shm major:0 minor:972 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f92bee18602c78e97abff330426051be6816bfa6a663d5ddee07fcf7b81c8a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f92bee18602c78e97abff330426051be6816bfa6a663d5ddee07fcf7b81c8a2/userdata/shm major:0 minor:1016 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7dfbe5ed23f58a4b2b795d3c941f199f4ff38f6453094d9db8bcf00a90c533d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7dfbe5ed23f58a4b2b795d3c941f199f4ff38f6453094d9db8bcf00a90c533d5/userdata/shm major:0 minor:478 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f19ee16fbfcf73db21dbee51bcb45264558bf405e040985a801120ef73b113c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f19ee16fbfcf73db21dbee51bcb45264558bf405e040985a801120ef73b113c/userdata/shm major:0 minor:479 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190/userdata/shm major:0 minor:457 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c/userdata/shm major:0 minor:787 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8cfa9195fd91aaa41473c2e4d0c90829d891ed3f5c7a55b7f1376df3f2ef829a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8cfa9195fd91aaa41473c2e4d0c90829d891ed3f5c7a55b7f1376df3f2ef829a/userdata/shm major:0 minor:485 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c/userdata/shm major:0 minor:384 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/93c3e972c1d72b8d1ee15395999be03050512e051706f9a30dccebe0b0487b51/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93c3e972c1d72b8d1ee15395999be03050512e051706f9a30dccebe0b0487b51/userdata/shm major:0 minor:839 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9a15447edfc940cd5b4dde2df7e8e6360f5b93278864866c14e686e33bd8d32a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a15447edfc940cd5b4dde2df7e8e6360f5b93278864866c14e686e33bd8d32a/userdata/shm major:0 minor:88 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a2f9634bc26fc4102ec0a118fdd84688c4a5ae575980f29492ab02ddd33ee35a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a2f9634bc26fc4102ec0a118fdd84688c4a5ae575980f29492ab02ddd33ee35a/userdata/shm major:0 minor:649 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a890ba92b025096e34e81f53a6cf37b1fcac472b14f9584479797572ac09eeb3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a890ba92b025096e34e81f53a6cf37b1fcac472b14f9584479797572ac09eeb3/userdata/shm major:0 minor:856 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae165efde01e25d890b70e74ec7c26c2fa71fdd6d466511fae93c4948c21b840/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae165efde01e25d890b70e74ec7c26c2fa71fdd6d466511fae93c4948c21b840/userdata/shm major:0 minor:848 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f/userdata/shm major:0 minor:452 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7/userdata/shm major:0 minor:333 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557/userdata/shm major:0 minor:493 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24/userdata/shm major:0 minor:393 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be/userdata/shm major:0 minor:115 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7de43cf6bf0c5d7b2b878ebc5990ddb62b5d5e375bde178cb4882acdf2057b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7de43cf6bf0c5d7b2b878ebc5990ddb62b5d5e375bde178cb4882acdf2057b0/userdata/shm major:0 minor:841 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d2e64e1e8754957863bad8639f4beaf999396133b2b69117105f95cd95cc7cf9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d2e64e1e8754957863bad8639f4beaf999396133b2b69117105f95cd95cc7cf9/userdata/shm major:0 minor:472 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136/userdata/shm major:0 minor:399 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4/userdata/shm major:0 minor:528 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7625d2cd327e3cafffe87f32286c7b0cc92c9be78c6e712456c0ec63d1a75aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7625d2cd327e3cafffe87f32286c7b0cc92c9be78c6e712456c0ec63d1a75aa/userdata/shm major:0 minor:482 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443/userdata/shm major:0 minor:401 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/dfe654b41556fae7663227362582c9c8b439e29f071dbdc91344f393aa640b68/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfe654b41556fae7663227362582c9c8b439e29f071dbdc91344f393aa640b68/userdata/shm major:0 minor:1082 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643/userdata/shm major:0 minor:1084 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f257d90986f3bc5c917783e713efe22ea2b8502b23f0e13b32408883ab3d2ef8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f257d90986f3bc5c917783e713efe22ea2b8502b23f0e13b32408883ab3d2ef8/userdata/shm major:0 minor:1088 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6/userdata/shm major:0 minor:1020 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f66902e008f5e3816231ec2d4e1a0e85eeb3453ed6e4f6ce1b4d241b3bf8e3ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f66902e008f5e3816231ec2d4e1a0e85eeb3453ed6e4f6ce1b4d241b3bf8e3ac/userdata/shm major:0 minor:784 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083/userdata/shm major:0 minor:318 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b/userdata/shm major:0 minor:502 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/feef592bfb9171a37aa394c51fc21738e74cfa163f594aa5160554c22d6d35c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/feef592bfb9171a37aa394c51fc21738e74cfa163f594aa5160554c22d6d35c6/userdata/shm major:0 minor:487 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~projected/kube-api-access-xcm8d:{mountpoint:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~projected/kube-api-access-xcm8d major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~projected/kube-api-access-jbv4l:{mountpoint:/var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~projected/kube-api-access-jbv4l major:0 minor:1047 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~secret/certs major:0 minor:1046 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1045 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:604 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~empty-dir/tmp major:0 minor:605 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~projected/kube-api-access-q6b9b:{mountpoint:/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~projected/kube-api-access-q6b9b major:0 minor:613 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~projected/kube-api-access-5gv8b:{mountpoint:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~projected/kube-api-access-5gv8b major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17adbc1a-f29c-4278-b29a-0cc3879b753f/volumes/kubernetes.io~projected/kube-api-access-v6sr4:{mountpoint:/var/lib/kubelet/pods/17adbc1a-f29c-4278-b29a-0cc3879b753f/volumes/kubernetes.io~projected/kube-api-access-v6sr4 major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17adbc1a-f29c-4278-b29a-0cc3879b753f/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/17adbc1a-f29c-4278-b29a-0cc3879b753f/volumes/kubernetes.io~secret/proxy-tls major:0 minor:988 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~projected/kube-api-access-cw64j:{mountpoint:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~projected/kube-api-access-cw64j major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1ad93612-ab12-4b30-984f-119e1b924a84/volumes/kubernetes.io~projected/kube-api-access-xzldt:{mountpoint:/var/lib/kubelet/pods/1ad93612-ab12-4b30-984f-119e1b924a84/volumes/kubernetes.io~projected/kube-api-access-xzldt major:0 minor:458 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~projected/kube-api-access-4mlkj:{mountpoint:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~projected/kube-api-access-4mlkj major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/etcd-client major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~projected/kube-api-access-wnvfd:{mountpoint:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~projected/kube-api-access-wnvfd major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~projected/ca-certs major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~projected/kube-api-access-pp5xj:{mountpoint:/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~projected/kube-api-access-pp5xj major:0 minor:456 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:454 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2385db6b-4286-4839-822c-aa9c52290172/volumes/kubernetes.io~projected/kube-api-access-d27hr:{mountpoint:/var/lib/kubelet/pods/2385db6b-4286-4839-822c-aa9c52290172/volumes/kubernetes.io~projected/kube-api-access-d27hr major:0 minor:847 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2385db6b-4286-4839-822c-aa9c52290172/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/2385db6b-4286-4839-822c-aa9c52290172/volumes/kubernetes.io~secret/proxy-tls major:0 minor:822 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2cad2401-dab1-49f7-870e-a742ebfe323f/volumes/kubernetes.io~projected/kube-api-access-rv9m7:{mountpoint:/var/lib/kubelet/pods/2cad2401-dab1-49f7-870e-a742ebfe323f/volumes/kubernetes.io~projected/kube-api-access-rv9m7 major:0 minor:317 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e0fa133-60e7-47d0-996e-7e85aef2a218/volumes/kubernetes.io~projected/kube-api-access-7rccw:{mountpoint:/var/lib/kubelet/pods/2e0fa133-60e7-47d0-996e-7e85aef2a218/volumes/kubernetes.io~projected/kube-api-access-7rccw major:0 minor:96 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/317a89ea-e9dd-4167-8568-bb36e2431015/volumes/kubernetes.io~projected/kube-api-access-nllws:{mountpoint:/var/lib/kubelet/pods/317a89ea-e9dd-4167-8568-bb36e2431015/volumes/kubernetes.io~projected/kube-api-access-nllws major:0 minor:95 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~projected/kube-api-access-2sqzx:{mountpoint:/var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~projected/kube-api-access-2sqzx major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:442 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~projected/kube-api-access-dzblt:{mountpoint:/var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~projected/kube-api-access-dzblt major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~secret/srv-cert major:0 minor:411 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/35d8f08f-4c57-44e0-8e8f-3969287e2a5a/volumes/kubernetes.io~projected/kube-api-access-q6d7j:{mountpoint:/var/lib/kubelet/pods/35d8f08f-4c57-44e0-8e8f-3969287e2a5a/volumes/kubernetes.io~projected/kube-api-access-q6d7j major:0 minor:83 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~projected/kube-api-access-crbvx:{mountpoint:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~projected/kube-api-access-crbvx major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:378 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:375 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~projected/kube-api-access-bkdqs:{mountpoint:/var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~projected/kube-api-access-bkdqs major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~projected/kube-api-access-d4dcj:{mountpoint:/var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~projected/kube-api-access-d4dcj major:0 minor:781 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:778 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~secret/webhook-cert major:0 minor:780 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3a039fc2-b0af-4b2c-a884-1c274c08064d/volumes/kubernetes.io~projected/kube-api-access-pmmhd:{mountpoint:/var/lib/kubelet/pods/3a039fc2-b0af-4b2c-a884-1c274c08064d/volumes/kubernetes.io~projected/kube-api-access-pmmhd major:0 minor:351 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a039fc2-b0af-4b2c-a884-1c274c08064d/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/3a039fc2-b0af-4b2c-a884-1c274c08064d/volumes/kubernetes.io~secret/signing-key major:0 minor:347 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~projected/kube-api-access-2s9rk:{mountpoint:/var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~projected/kube-api-access-2s9rk major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971/volumes/kubernetes.io~projected/kube-api-access-qwfnk:{mountpoint:/var/lib/kubelet/pods/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971/volumes/kubernetes.io~projected/kube-api-access-qwfnk major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4086d06f-d50e-4632-9da7-508909429eef/volumes/kubernetes.io~projected/kube-api-access-w4lx2:{mountpoint:/var/lib/kubelet/pods/4086d06f-d50e-4632-9da7-508909429eef/volumes/kubernetes.io~projected/kube-api-access-w4lx2 major:0 minor:105 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4671673d-afa0-481f-b3a2-2c2b9441b6ce/volumes/kubernetes.io~projected/kube-api-access-d7jz6:{mountpoint:/var/lib/kubelet/pods/4671673d-afa0-481f-b3a2-2c2b9441b6ce/volumes/kubernetes.io~projected/kube-api-access-d7jz6 major:0 minor:596 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4671673d-afa0-481f-b3a2-2c2b9441b6ce/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4671673d-afa0-481f-b3a2-2c2b9441b6ce/volumes/kubernetes.io~secret/metrics-tls major:0 minor:643 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/46ae7b31-c91c-477e-a04a-a3a8541747be/volumes/kubernetes.io~projected/kube-api-access-zwsns:{mountpoint:/var/lib/kubelet/pods/46ae7b31-c91c-477e-a04a-a3a8541747be/volumes/kubernetes.io~projected/kube-api-access-zwsns major:0 minor:114 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~projected/kube-api-access-ghzrb:{mountpoint:/var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~projected/kube-api-access-ghzrb major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~secret/srv-cert major:0 minor:364 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~projected/kube-api-access-brvlj:{mountpoint:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~projected/kube-api-access-brvlj major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:125 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~projected/kube-api-access-99mks:{mountpoint:/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~projected/kube-api-access-99mks major:0 minor:1048 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1036 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1037 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5bccf60c-5b07-4f40-8430-12bfb62661c7/volumes/kubernetes.io~projected/kube-api-access-4b6rn:{mountpoint:/var/lib/kubelet/pods/5bccf60c-5b07-4f40-8430-12bfb62661c7/volumes/kubernetes.io~projected/kube-api-access-4b6rn major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~projected/kube-api-access-lpl28:{mountpoint:/var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~projected/kube-api-access-lpl28 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~secret/metrics-certs major:0 minor:468 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes/kubernetes.io~projected/kube-api-access-xhzcj:{mountpoint:/var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes/kubernetes.io~projected/kube-api-access-xhzcj major:0 minor:689 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes/kubernetes.io~secret/serving-cert major:0 minor:688 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~projected/kube-api-access-xhmmv:{mountpoint:/var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~projected/kube-api-access-xhmmv major:0 minor:1080 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/734f9f10-5bde-44d5-a831-021b93fd667d/volumes/kubernetes.io~projected/kube-api-access-mq596:{mountpoint:/var/lib/kubelet/pods/734f9f10-5bde-44d5-a831-021b93fd667d/volumes/kubernetes.io~projected/kube-api-access-mq596 major:0 minor:860 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/734f9f10-5bde-44d5-a831-021b93fd667d/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/734f9f10-5bde-44d5-a831-021b93fd667d/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:859 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/kube-api-access-h8v5n:{mountpoint:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/kube-api-access-h8v5n major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:382 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a951627-c032-4846-821c-c4bcbf4a91b9/volumes/kubernetes.io~projected/kube-api-access-wxn4v:{mountpoint:/var/lib/kubelet/pods/7a951627-c032-4846-821c-c4bcbf4a91b9/volumes/kubernetes.io~projected/kube-api-access-wxn4v major:0 minor:836 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a951627-c032-4846-821c-c4bcbf4a91b9/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/7a951627-c032-4846-821c-c4bcbf4a91b9/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:830 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e309570-09d0-412a-a74b-c5397d048a30/volumes/kubernetes.io~projected/kube-api-access-mcfq7:{mountpoint:/var/lib/kubelet/pods/7e309570-09d0-412a-a74b-c5397d048a30/volumes/kubernetes.io~projected/kube-api-access-mcfq7 major:0 minor:835 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e309570-09d0-412a-a74b-c5397d048a30/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/7e309570-09d0-412a-a74b-c5397d048a30/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:828 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fa6920b-f7d9-4758-bba9-356a2c8b1b67/volumes/kubernetes.io~projected/kube-api-access-w9jhr:{mountpoint:/var/lib/kubelet/pods/7fa6920b-f7d9-4758-bba9-356a2c8b1b67/volumes/kubernetes.io~projected/kube-api-access-w9jhr major:0 minor:833 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7fa6920b-f7d9-4758-bba9-356a2c8b1b67/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/7fa6920b-f7d9-4758-bba9-356a2c8b1b67/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~projected/kube-api-access-9hb2q:{mountpoint:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~projected/kube-api-access-9hb2q major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~projected/kube-api-access-882b8:{mountpoint:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~projected/kube-api-access-882b8 major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~projected/kube-api-access-r8dfw:{mountpoint:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~projected/kube-api-access-r8dfw major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8f59a12b-d690-44c5-972c-fb4b0b5819f1/volumes/kubernetes.io~projected/kube-api-access-8kpz5:{mountpoint:/var/lib/kubelet/pods/8f59a12b-d690-44c5-972c-fb4b0b5819f1/volumes/kubernetes.io~projected/kube-api-access-8kpz5 major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92e396cd-a0d9-4b6b-9d82-add1ce2a8712/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/92e396cd-a0d9-4b6b-9d82-add1ce2a8712/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1009 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/933a37fd-d76a-4f60-8dd8-301fb73daf42/volumes/kubernetes.io~projected/kube-api-access-5w454:{mountpoint:/var/lib/kubelet/pods/933a37fd-d76a-4f60-8dd8-301fb73daf42/volumes/kubernetes.io~projected/kube-api-access-5w454 major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/933a37fd-d76a-4f60-8dd8-301fb73daf42/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/933a37fd-d76a-4f60-8dd8-301fb73daf42/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~projected/kube-api-access major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~projected/kube-api-access-7b29z:{mountpoint:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~projected/kube-api-access-7b29z major:0 minor:443 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/encryption-config major:0 minor:439 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/etcd-client major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/serving-cert major:0 minor:440 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ca94153-9d1a-4b0a-a3eb-556e85f2e875/volumes/kubernetes.io~projected/kube-api-access-hbksj:{mountpoint:/var/lib/kubelet/pods/9ca94153-9d1a-4b0a-a3eb-556e85f2e875/volumes/kubernetes.io~projected/kube-api-access-hbksj major:0 minor:383 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~projected/kube-api-access-qc69w:{mountpoint:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~projected/kube-api-access-qc69w major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~secret/cert major:0 minor:374 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:381 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes/kubernetes.io~projected/kube-api-access-mthwt:{mountpoint:/var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes/kubernetes.io~projected/kube-api-access-mthwt major:0 minor:774 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes/kubernetes.io~secret/serving-cert major:0 minor:737 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~projected/kube-api-access-w6bfw:{mountpoint:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~projected/kube-api-access-w6bfw major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/default-certificate major:0 minor:1010 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/stats-auth major:0 minor:1011 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~projected/kube-api-access-wlbm6:{mountpoint:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~projected/kube-api-access-wlbm6 major:0 minor:477 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/encryption-config major:0 minor:475 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/etcd-client major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/serving-cert major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~projected/kube-api-access-lpdw6:{mountpoint:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~projected/kube-api-access-lpdw6 major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1123 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~projected/kube-api-access-n2msq:{mountpoint:/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~projected/kube-api-access-n2msq major:0 minor:1081 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1076 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1078 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/baeb6380-95e4-4e10-9798-e1e22f20bade/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/baeb6380-95e4-4e10-9798-e1e22f20bade/volumes/kubernetes.io~projected/ca-certs major:0 minor:465 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/baeb6380-95e4-4e10-9798-e1e22f20bade/volumes/kubernetes.io~projected/kube-api-access-xlm4c:{mountpoint:/var/lib/kubelet/pods/baeb6380-95e4-4e10-9798-e1e22f20bade/volumes/kubernetes.io~projected/kube-api-access-xlm4c major:0 minor:461 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bc9af4af-fb39-4a51-83ae-dab3f1d159f2/volumes/kubernetes.io~projected/kube-api-access-twczm:{mountpoint:/var/lib/kubelet/pods/bc9af4af-fb39-4a51-83ae-dab3f1d159f2/volumes/kubernetes.io~projected/kube-api-access-twczm major:0 minor:1172 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bc9af4af-fb39-4a51-83ae-dab3f1d159f2/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/bc9af4af-fb39-4a51-83ae-dab3f1d159f2/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1168 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bd033b5b-af07-4e69-9a5c-46f7c9bde95a/volumes/kubernetes.io~projected/kube-api-access-5475b:{mountpoint:/var/lib/kubelet/pods/bd033b5b-af07-4e69-9a5c-46f7c9bde95a/volumes/kubernetes.io~projected/kube-api-access-5475b major:0 minor:838 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd033b5b-af07-4e69-9a5c-46f7c9bde95a/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/bd033b5b-af07-4e69-9a5c-46f7c9bde95a/volumes/kubernetes.io~secret/cert major:0 minor:837 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c074751c-6b3c-44df-aca5-42fa69662779/volumes/kubernetes.io~projected/kube-api-access-bbztv:{mountpoint:/var/lib/kubelet/pods/c074751c-6b3c-44df-aca5-42fa69662779/volumes/kubernetes.io~projected/kube-api-access-bbztv major:0 minor:846 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c074751c-6b3c-44df-aca5-42fa69662779/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c074751c-6b3c-44df-aca5-42fa69662779/volumes/kubernetes.io~secret/serving-cert major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~projected/kube-api-access-z9tzl:{mountpoint:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~projected/kube-api-access-z9tzl major:0 minor:238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~projected/kube-api-access-6f8xk:{mountpoint:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~projected/kube-api-access-6f8xk major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2cf9274-25d2-4576-bbef-1d416dfff0a9/volumes/kubernetes.io~projected/kube-api-access-vljm6:{mountpoint:/var/lib/kubelet/pods/d2cf9274-25d2-4576-bbef-1d416dfff0a9/volumes/kubernetes.io~projected/kube-api-access-vljm6 major:0 minor:759 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/volumes/kubernetes.io~projected/kube-api-access-5dvd5:{mountpoint:/var/lib/kubelet/pods/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/volumes/kubernetes.io~projected/kube-api-access-5dvd5 major:0 minor:845 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:805 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/volumes/kubernetes.io~projected/kube-api-access-4djxt:{mountpoint:/var/lib/kubelet/pods/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/volumes/kubernetes.io~projected/kube-api-access-4djxt major:0 minor:971 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:968 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~projected/kube-api-access-lhqk9:{mountpoint:/var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~projected/kube-api-access-lhqk9 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~secret/metrics-tls major:0 minor:377 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e4d0b174-33e4-46ee-863b-b5cc2a271b85/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e4d0b174-33e4-46ee-863b-b5cc2a271b85/volumes/kubernetes.io~projected/kube-api-access major:0 minor:380 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e4d0b174-33e4-46ee-863b-b5cc2a271b85/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e4d0b174-33e4-46ee-863b-b5cc2a271b85/volumes/kubernetes.io~secret/serving-cert major:0 minor:98 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~projected/kube-api-access-k254v:{mountpoint:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~projected/kube-api-access-k254v major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebe459df-4be3-4a73-a061-5d2c637f57be/volumes/kubernetes.io~projected/kube-api-access-fqxgz:{mountpoint:/var/lib/kubelet/pods/ebe459df-4be3-4a73-a061-5d2c637f57be/volumes/kubernetes.io~projected/kube-api-access-fqxgz major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~projected/kube-api-access-qvxs4:{mountpoint:/var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~projected/kube-api-access-qvxs4 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:469 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/kube-api-access-j5mgr:{mountpoint:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/kube-api-access-j5mgr major:0 minor:235 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~secret/metrics-tls major:0 minor:379 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c106be-27ea-4849-b365-eff6d25f5e71/volumes/kubernetes.io~projected/kube-api-access-hthf8:{mountpoint:/var/lib/kubelet/pods/f3c106be-27ea-4849-b365-eff6d25f5e71/volumes/kubernetes.io~projected/kube-api-access-hthf8 major:0 minor:901 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c106be-27ea-4849-b365-eff6d25f5e71/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/f3c106be-27ea-4849-b365-eff6d25f5e71/volumes/kubernetes.io~secret/proxy-tls major:0 minor:900 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/3423d0369db68916018e7a90bfb647c23e66e99bc6963c3f17354dd44adb5421/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-1018:{mountpoint:/var/lib/containers/storage/overlay/cfeedaee03cc8f577c43f7f9af2ff6d87d8349068b84ab3f0932d40fe8918289/merged major:0 minor:1018 fsType:overlay blockSize:0} overlay_0-1022:{mountpoint:/var/lib/containers/storage/overlay/a6af0525c30b5b62359cfd878c2b892ee9eff3ef1af363d48d94884adb818949/merged major:0 minor:1022 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/8c863cfbcdf1bfba62f1246b5ebb06520b834d4bb35a3664ebf2994675d3dd1c/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/d0a4f210da177efad2568a50bd1fb16ccb376f2cfa53002ceaf2594cdac5c11e/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/45d49d7884405b6b81c99327ec8ebb5f8849506f1e6efd0ca8ec211808989783/merged major:0 minor:1028 fsType:overlay blockSize:0} 
overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/1ca71bb35b93561fdd850d154d00f44091fffa2c78deea100104aec8292be872/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1053:{mountpoint:/var/lib/containers/storage/overlay/cee474de7362f72f0e3edf983a6a7a2c493dfbee69884785cf973784105ecc98/merged major:0 minor:1053 fsType:overlay blockSize:0} overlay_0-1055:{mountpoint:/var/lib/containers/storage/overlay/13ffa81a4038d94bd218a1378d087f0fc548ae69b6ddec16b8c7929074f5f388/merged major:0 minor:1055 fsType:overlay blockSize:0} overlay_0-1057:{mountpoint:/var/lib/containers/storage/overlay/30c488a335abd1d74601b97380d8dbe3f917a439cbcd948eaebe8bd0974b6818/merged major:0 minor:1057 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/df3adcc1a65aa5e03f0e9aa94541de6b44b0df4a019e8a1a7d5cb4ef59ac03ab/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1064:{mountpoint:/var/lib/containers/storage/overlay/a7696bbaed3a6fbf6bdb02e78ee729ea1de72efd54bc836222504b4a3b9c7dd9/merged major:0 minor:1064 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/3d7d42730aadc5eeaa2dd0b801379ac0f176152700d67c545424a480ed9ef4bb/merged major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/bbf6907f3085a98ae7f7b5eb18e69f82792e5cddaa486a70ce0af0576519e5f1/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-1090:{mountpoint:/var/lib/containers/storage/overlay/e38edf65dfad48ccab238a28455248626c6a9eedb034b6e0b7f6f60e6d4d17e8/merged major:0 minor:1090 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/60fed3f436a81b0116746fa2022438ac3718d4d26e067e8e7ac9860fa25a364d/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/09e163a3c28df37461e48c34eeebcd83722cf736273ff119b9b865878548180c/merged major:0 minor:1094 fsType:overlay blockSize:0} 
overlay_0-1095:{mountpoint:/var/lib/containers/storage/overlay/af655c731c2526dcd139c2a0904d46412ea7016be412b085e96cca11e3314940/merged major:0 minor:1095 fsType:overlay blockSize:0} overlay_0-1102:{mountpoint:/var/lib/containers/storage/overlay/a6fb3a6498a004dd487418756d4b652b5cf0505d4fd30dd179b75ee62f36fac3/merged major:0 minor:1102 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/82e0f3169a328f5054397390ca2b1222e23f09251ea73d9d8c82b98f57932b8c/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-1109:{mountpoint:/var/lib/containers/storage/overlay/769259855d23df06d6e422c01882d4daf7cf91c01b4de0b9e7a9d540cfe6d572/merged major:0 minor:1109 fsType:overlay blockSize:0} overlay_0-1118:{mountpoint:/var/lib/containers/storage/overlay/5752664f415b24f41b22d9765a44bc31ae004791cdbe1ff4eeb20307c956f426/merged major:0 minor:1118 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/3227c07591baa873d0cc7a5ed65f184dfc324766fe6520043e689e179408de9b/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/ce27caefccf3adb1dcb7eb9e19b100737910763fd6cd34dd6cb0324f39a29157/merged major:0 minor:1132 fsType:overlay blockSize:0} overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/eccc6ce1e1ea95551ce89f1fbdeccaeb365b74f9578d23147d5d8a687847eaeb/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1136:{mountpoint:/var/lib/containers/storage/overlay/01d3b149be4522de91d31dcdf91f563ed62f87e4b40fe3da4c2923d347d10388/merged major:0 minor:1136 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/3394a3d07b58f1dfcdb2a8a9b8a05a873c958bc450133611ba23b4d01667493d/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-117:{mountpoint:/var/lib/containers/storage/overlay/3013797d391afb903ba508a0744a36b2375ed50964372d0a6adcdfd2b502eebd/merged major:0 minor:117 fsType:overlay blockSize:0} 
overlay_0-1175:{mountpoint:/var/lib/containers/storage/overlay/87bb1f1de44f4c8d0195856d67c2b68b68487854ff914cd1542932be6d1753c7/merged major:0 minor:1175 fsType:overlay blockSize:0} overlay_0-1177:{mountpoint:/var/lib/containers/storage/overlay/4dc7ae8d70285db08ba7385d8e9a94ab8b0353467b0dff63aabff3a0bbe6ebda/merged major:0 minor:1177 fsType:overlay blockSize:0} overlay_0-1179:{mountpoint:/var/lib/containers/storage/overlay/65191e675d86547f5a365d4ccd1299ce587e0525b65bc4d9fad130de87ad50c1/merged major:0 minor:1179 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/ab0ddbf8738a8c996629dfa02e3045d5dbbd0f2fe7e3033d1c1888e4d91fd318/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/9f2cc4def61523aaf33fe56a40bec1b24565287c13681abcdab9538624dc0f62/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/362ebcc3008b14449b9214bbcb594366c6de16e857328c1ec03aed820bc0d3dd/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/99aad264c440f3c311b281fa876cfd6437f890d3e6a63ad074cfafc1a9d61aad/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/61b80e8331b24d3d1d12307d3c2dcbfdab09c1f78fd249105db1958f96a5bcf3/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/d1151958e9bcd57012f9df080cbcd4f154a92a698b67b450c6e6a661cc6b8165/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/f294675ac05c91263d18958c0db215c892c59f3c32ba7bc3162ec5e747a80ffc/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/e3e3168e70147df41769069bee96791517b3d1daf53c0bc28fb8666f3fe160aa/merged major:0 minor:158 fsType:overlay blockSize:0} 
overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/238f8ca0fb75cbd627a6a17c4e503e58c66329b02262f25198d6aeb9801a2eed/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/0d95347d1d3dbe40805de00ce46be897004d5706268ec8363da9f95712b1686c/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/d1f14c8505eece3b4ec54813450ac2f9b79e19293a183c5465c050b31e419e01/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/a73aef02c5a14e2474c4b613568ba63cf44d11d3fb49d97d48f19082dab856f0/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/62271c62bf15244d70720519ff8af6db78a4f8e599fc1b525dad671e754641c9/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/1e454559105307544df6bf62cb87e50e6fb836862d7106ba551da41383b3806f/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/e9ad479b6e63f9b8ee129130e94fe6b16984acdbd0f6e1a218ce10a7c4040be8/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/37415ef378c2d4a07f2c4c6a96a8024ae0462bfc693247debea0d79612b5e58d/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/ebb15355309b34ffc9b37d1e605fe08e55bda4fc54851b16984a0b2d6045aeca/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/dae2ce68a255a8a0b4321d9796f9e4c2c788da7163c0b315c03d96d19f0855c6/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/b50d9fe50e007e0d368e36c60f55b1d333d687568bc5358f5f72f5477dd703cb/merged major:0 minor:204 fsType:overlay blockSize:0} 
overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/bdc8bc7ecfa859111d21f5a7d2e21fa3955a2c5e9761c8d41f541c77980ceb6c/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-278:{mountpoint:/var/lib/containers/storage/overlay/589e3e55bc1cc76b94a111da89c4d5fe02807d858306aadd45a34d9b91caa168/merged major:0 minor:278 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/71b9d8dcd422093bd612fabdf8b8824cc0c13755a517d3d757847992a3582758/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/7d6dc61f5707e3c0cb7237c51e3ed5bb3bad665119afdf7897f82dd1b542b88a/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/959bc1ef2818b0e01b9b87596cd6f0c8ee90139a85c0a119f20c29364af4da62/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/44276f3c489655dcb7bb5779ac51f2b0dcabe74257413b49e99fce0e751fa841/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/b61203dfc553cb59f57c91bbf9a9caea8c8de0804ea7b92d935086c90d77cb12/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/c45218af49d7e41baec011cbb33667e98e84d370728601dd5b5f3f4cbee8edb7/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/aa4754b567bce24d0db62309e5a6ebdec6a331e02bf0f2dfa6a6675e09a8f50d/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/57963f10af35de1bd893eacca618cd775c06c7e37582d53acb7390c45be2a158/merged major:0 minor:294 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/3c19c0808088204bf1a9afc8a14658ce432f194ce408a36623995c3acda01096/merged major:0 minor:296 fsType:overlay blockSize:0} 
overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/6d96ad5f57032fb85bd12cd2dc3c682c9a8216e68c0668bcf577bdde0a866adc/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/3bf2cebae1f296da2250f2c959d1d735d2ccb96c5becd36064ee428321e36cdf/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/f1ee6c537b44b112415b4f08279ec5c0a52c03e06efc71fe06e65227cbd9be36/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/cefc3e224458b9e0426a7c82d5ece7625cf45a4bc7cd381501fdaff8333fa022/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/56932ef97cc2897bd844816246aafd69e7297691cb10a618e8c5f5ecd3616b96/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/337dcbb94b92bc82da815320504b64e6471dc6f4c6132c6d2581c6d8d9d87495/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/2d2618d976ba3c16159cde551c15c850959ad8ee0d0c1daa5a14e483ee5098bd/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/bd694355e509c1a139cbb4de4360d88d395e5d1478ff6bd25531159faff533ae/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/d390a8ac535c384db5ec540d52eb6f1fc4a2ba1e57dfb55de2f4ef796523af85/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/486f45a781eb8d9164422ef27e849ffe24d160b95a16a58a515e498d26356b16/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/2b3ce64117dd3668a05fe7cb80f5d40a0aa708a84191e00a04eb1567e649f326/merged major:0 minor:336 fsType:overlay blockSize:0} 
overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/b13c09568a8a1dbbe0ba76b48decdf5677205bbb4de0f2e3107988959f1598d4/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/97cf8437fcf57c28990a1bd1cf00ac637c6040658017ff96f5056f343d3631c9/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/3f8f72e4bbf641198c24b3b733157dfc42d58c5f3e867aa09de1e07b664dff6f/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/9c8851e16b5e1f7c189ce95263e1cbf8c9e8321ad5c1768b1f8c10b410a752e4/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/5b512799477caa17d6ace0378782b10988972dd05d33802e81604a41c181ce14/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/a26e327f476bbf4c18d344cb486a6e60ff0ce49f38cc7382d60572e5e8775c8e/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/ad5d418599053dcb559194e421c852b41cd5f8baa1a8f5e44bcb1475fe27871d/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/323672c969ca94a2d5ac3e4d895425234a4d5f6b4bd05e53d0d5b7eaac41ca7d/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/a9bfe385737a2373dc827351d31ce8913cc045dcc02bc7cfdefa7b3c95b7f503/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/0ee066ce5a6e3c0c24bd8d183c42326e7891d4143a176dd1a2474bd14a0e61e3/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/f15df271d19cd38cf5a6b42c052e2499a62a93d6ed3666421c2a87d62b54f54d/merged major:0 minor:369 fsType:overlay blockSize:0} 
overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/853300a4443708f2c4a9e89a5c5c4f8eef7e72fd83789d6febe12dee745fb07b/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/632c8c4e13654b9ed47c1c70f18b60390c2be7150fa69466ecce87c3a3eb0623/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-406:{mountpoint:/var/lib/containers/storage/overlay/2ddc2dcbb92ab6d8785f09a17952e51a7e9ea07bdf79dccf0fb134f8745fb7ac/merged major:0 minor:406 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/db8ce9a0aadf10f6bdbd7e0931f17cc1e448fb5f24da4f85c7ce04000a3c36b4/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/295ca1aee1a625e8fd4f576d366fb618a0c100dd8f0a5aead6eebdc4e3282675/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/4b47ff27e209801e5114ee451f5e3f3f2b2820bf70e46a1101c0fbde45354aa0/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/4157bcd9b566a7716151f0ef311e77185152de8bd0336ef87157e0db318bc887/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-42:{mountpoint:/var/lib/containers/storage/overlay/9b74bbd00751a1e11a3b1996e11ed94ed7499e3fd1615f247228b5637c1fda63/merged major:0 minor:42 fsType:overlay blockSize:0} overlay_0-421:{mountpoint:/var/lib/containers/storage/overlay/5495e46448f36a26d55093795b346a5f7e571d41a14b5ac5f7bb724789ef15e8/merged major:0 minor:421 fsType:overlay blockSize:0} overlay_0-423:{mountpoint:/var/lib/containers/storage/overlay/28b42c7b53061870d7a8361ae52fe833204553a76d0b35c1d42377339fd7e9dd/merged major:0 minor:423 fsType:overlay blockSize:0} overlay_0-425:{mountpoint:/var/lib/containers/storage/overlay/cfa4cc6bcb3c8df5de20d31a53f253174f9bd56451f592b23e22cf323df295dc/merged major:0 minor:425 fsType:overlay blockSize:0} 
overlay_0-427:{mountpoint:/var/lib/containers/storage/overlay/40c89a1cd1b1e5de8073dd874a7f04b180ed3a28e9c6c4928f40e50d5c0ede2d/merged major:0 minor:427 fsType:overlay blockSize:0} overlay_0-429:{mountpoint:/var/lib/containers/storage/overlay/a24f91f8ae01e5a863554f8ac76e34d3515ba51522348271ac679bbbed3320da/merged major:0 minor:429 fsType:overlay blockSize:0} overlay_0-449:{mountpoint:/var/lib/containers/storage/overlay/28215ba691688e9e045095f975b34162568a78c673e562b451aafcb45d43a012/merged major:0 minor:449 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/62803e6baf6730ac34d0d6c660da0b9886f5049f45f52fe52bc1690e9da7622d/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/e3e65847eede7b00699db27fcd5836817338861bcdcf405a74efce3996c14d76/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-460:{mountpoint:/var/lib/containers/storage/overlay/1b7a317da6b7f2a3bb3d9be39dd5678eacc4b84b2bf69023b5e4f3d19bc4e679/merged major:0 minor:460 fsType:overlay blockSize:0} overlay_0-462:{mountpoint:/var/lib/containers/storage/overlay/02acd36a7cf0592353895262ac403acc7aff4250d8b41c6422034a6531ff5cb1/merged major:0 minor:462 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/169db2e10822bbb90534b03047a0f6c990b9aefb67166ed6d7fd0395c798da97/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/cb0df5fec85224a4cc89d74bbda8f80da746f741666b36f363b6b3e0eb7a2008/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/4b820c4acfa9d77a1ca46da7dc23507a0f1091aa1f9f16c07caf49879251c3c0/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-480:{mountpoint:/var/lib/containers/storage/overlay/429df1a5a7d3dc2dd449c043e181cd82add63af910a2fb61dae852f60e5bab5d/merged major:0 minor:480 fsType:overlay blockSize:0} 
overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/25688fc59ab172e94f811841efeccce6ea1f3765cd72b845e3af2835bd0dd064/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-500:{mountpoint:/var/lib/containers/storage/overlay/9b043bc904947bfb7263e5bd2875c28446fb1382d0a04bfb6e09015507b4c07d/merged major:0 minor:500 fsType:overlay blockSize:0} overlay_0-505:{mountpoint:/var/lib/containers/storage/overlay/51d6e08e95dca876de171e1f132f4277d56495450100223e9b5a93a05499e8cb/merged major:0 minor:505 fsType:overlay blockSize:0} overlay_0-506:{mountpoint:/var/lib/containers/storage/overlay/90bb6d1a321797e94a7c8a23c6537aa2ff458aa7c4451e6a95bd83f63ef64689/merged major:0 minor:506 fsType:overlay blockSize:0} overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/28e5e6cd37951b469bdf2ade1e03c295ce4d65456ff45b3032ecaf5db9253d52/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-51:{mountpoint:/var/lib/containers/storage/overlay/44d1619f951b2bfc7659d53eae3280a469d277d4a31187902717d42dc2d5b31e/merged major:0 minor:51 fsType:overlay blockSize:0} overlay_0-511:{mountpoint:/var/lib/containers/storage/overlay/41e7eb24c4bbd3d70e4805da09d39426de2e21850c11e8cfd53bbc05ee20d85b/merged major:0 minor:511 fsType:overlay blockSize:0} overlay_0-512:{mountpoint:/var/lib/containers/storage/overlay/182c1eede119df852a4adfea4fbfa5ed17b22f6aa10fe904fa3a82eefc53b52c/merged major:0 minor:512 fsType:overlay blockSize:0} overlay_0-517:{mountpoint:/var/lib/containers/storage/overlay/f47f004d91463bb670a88570832349c145975320ea4b131bc452d4b813821b9b/merged major:0 minor:517 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/d5cbd06a2264f250a4ba86a3982f177f9c61cacdde2d1087eef80563f7b834e1/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-522:{mountpoint:/var/lib/containers/storage/overlay/d626ab3da573df3b95a6472f1f37d162f260752882029b3ba3559d66681ddf6c/merged major:0 minor:522 fsType:overlay blockSize:0} 
overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/39edf368bbacc9cf576af7b0f10fc22e7fa8680a0c3a75ab3801c70e41c14ad9/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-53:{mountpoint:/var/lib/containers/storage/overlay/20bed8ad3dc08b7723803d9dcb5f11d1b0cc49627337045215f8d4e587f4f81b/merged major:0 minor:53 fsType:overlay blockSize:0} overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/a20b94609ae364c93e454270283eefb806f84b80feb45e218cede97da13be1f5/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-533:{mountpoint:/var/lib/containers/storage/overlay/11ae6a7bd9ee44ea11b055c044b2e4699ba8b4ff0f767fed076adcb24fbd0eee/merged major:0 minor:533 fsType:overlay blockSize:0} overlay_0-536:{mountpoint:/var/lib/containers/storage/overlay/44b6c27522a235dab97527e723cedcf920f9b282a31d4006612e7b7e2760b6fd/merged major:0 minor:536 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/3193d380d11279960378b39463e5222d4f80925967eae370adbf658e201c08ca/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/e4e4dff46334c609fefd55c24d525fb9661d288676e4f2bb77ee7d6b6c2be302/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/f626950006103f7fe623c820ce373dbd9cb7df3563ad5456bae31f4f330132e6/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-552:{mountpoint:/var/lib/containers/storage/overlay/350db4fe45e619804adad16bae05c377279b7b4e702b0781f1dfa35e99dbc0f0/merged major:0 minor:552 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/50de4f30003aa023392360822a547366717562fa6eeda2b1abad6bf28740588a/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/8a7de2cabfb124040cf7d79624f829a1beef58eb21d69a63f8ab5f2339eaabe1/merged major:0 minor:56 fsType:overlay blockSize:0} 
overlay_0-560:{mountpoint:/var/lib/containers/storage/overlay/a56eef05b994b1a58ab1149662111aecb44e9014462718c4482474afbc24a306/merged major:0 minor:560 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/4f99197ff4df8acf9bfe31191d142259dab87b4316084f134577c49d6ff3caaa/merged major:0 minor:567 fsType:overlay blockSize:0} overlay_0-571:{mountpoint:/var/lib/containers/storage/overlay/6e4a8036118b58fc93559e5a2facd33013eb7b7ec7a6b3f059e5a62129ceb0bb/merged major:0 minor:571 fsType:overlay blockSize:0} overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/52cc07243c1486020841bec4976fcf73c66a56d02c7b0dce834688277d2e3fa7/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-574:{mountpoint:/var/lib/containers/storage/overlay/8e7b4f7d72d43e5bf91bc12d3b87efbbe09c4e3e586935801c8abc3d1b89fe64/merged major:0 minor:574 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/51ba6962192c3cd9c1ff4260349c97379a2a317d45fc875e688b3c47db701126/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/fb3e877a237396b67f7f33315b242f0aca8aadeddc84e5b9b83b98a490ad5da4/merged major:0 minor:578 fsType:overlay blockSize:0} overlay_0-580:{mountpoint:/var/lib/containers/storage/overlay/6369b40ad3862e39b0ae159ae30b6ff6f437bf55a580548452c7d5b80ae0b8f7/merged major:0 minor:580 fsType:overlay blockSize:0} overlay_0-582:{mountpoint:/var/lib/containers/storage/overlay/417885e253252d24d166db1015db75af516ae2a6ef208c54c4ad86efa640ac4f/merged major:0 minor:582 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/e01d5d3705cbef42c07ee55376e4bccf9349888bcb97a3652bb30a1458c76a17/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-586:{mountpoint:/var/lib/containers/storage/overlay/346f260ede8219947c1d1ee44e89e748f0a811a21f9547005ef21b4e54e9e41a/merged major:0 minor:586 fsType:overlay blockSize:0} 
overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/092ec7b0f69612d5fa3e9066e3312909cd7e0e7c9b23c5d3e07f3b00aaaad284/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-590:{mountpoint:/var/lib/containers/storage/overlay/baf4164f540739c7ca154639c8589890f44285f41ef8436b51bb5d1b2fb39739/merged major:0 minor:590 fsType:overlay blockSize:0} overlay_0-592:{mountpoint:/var/lib/containers/storage/overlay/79abb49746214d9a9bc1aed2f30d281d8212d30004231a34242b0cc11c69d6a0/merged major:0 minor:592 fsType:overlay blockSize:0} overlay_0-594:{mountpoint:/var/lib/containers/storage/overlay/61250b9c7f28f9faad8d387b9104412e1d84634eb0a0fdc4f7c39e0469b2cd8e/merged major:0 minor:594 fsType:overlay blockSize:0} overlay_0-598:{mountpoint:/var/lib/containers/storage/overlay/3edda4ac0d78a2efc82f0a87c6878aae1211b1bea23fa56b3ee9f04145f1d672/merged major:0 minor:598 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/cad2a0e30de56b697ea6adf23a5ea1e3d93656624de651a711c1352aa545ea67/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-601:{mountpoint:/var/lib/containers/storage/overlay/ae9a93909f26e583ca61467f67f6f0bdc917a44d9d6fa68e70596b269c02b597/merged major:0 minor:601 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/81ac0bf67d6c033a3b30ab353ca694bc6ac7c89ae6dd84f5cf4274d446aed1ae/merged major:0 minor:602 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/ca4c45c8bd1398be91269532f5cddd27a4ff9e15b92cf4b938012be0ae6f1989/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-620:{mountpoint:/var/lib/containers/storage/overlay/3f713d4114c9f96f74165591fa73e37bf8438d6d77371502d2dd600a088672be/merged major:0 minor:620 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/1abfa117cfd474715a3fc262b3a43fa2e3433da2f64a46ab71cebb1c036c2fd6/merged major:0 minor:621 fsType:overlay blockSize:0} 
overlay_0-624:{mountpoint:/var/lib/containers/storage/overlay/2949a86b14e9fdbab8c6cc3facb1bef022a0c952c6a72823122830b60cdc97d0/merged major:0 minor:624 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/144f161f4c9310b2f8704c207a81549156cc244b5f259efb24b87a8c214151c1/merged major:0 minor:631 fsType:overlay blockSize:0} overlay_0-633:{mountpoint:/var/lib/containers/storage/overlay/7eb64c9be67488c66b0be549e2c402da4115dcbe2092241cecb963c6dffc4977/merged major:0 minor:633 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/88442926ee95f2092f03835915bf9acb9d8903a2948ecd26835bc4f36b62c340/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/d01511a86c76fe31115b9c19427f20f2a678bf7de0d429f4f2fda858ec489289/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-673:{mountpoint:/var/lib/containers/storage/overlay/d7589169c7c58ac932c17a5f6c554ab16fc33e0ae424bd6c47e267734e8e2361/merged major:0 minor:673 fsType:overlay blockSize:0} overlay_0-687:{mountpoint:/var/lib/containers/storage/overlay/e5735198ed92bc03aa0a70e7c74b26ba35a0a540afe1a0ed14ca4144c1598a1b/merged major:0 minor:687 fsType:overlay blockSize:0} overlay_0-704:{mountpoint:/var/lib/containers/storage/overlay/6189ee26d5d7c4f0f706052dc9804e90474c1c36c1f3cb5a67e090573f882d56/merged major:0 minor:704 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/d87fe1393eee0901faccea9ce40d3238f0ca1abec5096b0cb771042b6637c7a6/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-717:{mountpoint:/var/lib/containers/storage/overlay/cc34e7106723d4c8ad66ddce59c7811688ccd89dd21725e9141e6ce9c8f632bd/merged major:0 minor:717 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/54e5bf3b7726353fc18ba1dc5240b72ddbed572cbdf5fbdfca71556c91d84cc1/merged major:0 minor:72 fsType:overlay blockSize:0} 
overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/d657818ea48adac26d80d9e809242ef946da766472ae5aa6de302d9baad2b58d/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-741:{mountpoint:/var/lib/containers/storage/overlay/2e738cb794b92b773e19fc9acdb1831be5a052d4a3dce048b62d1d3ee900b4cc/merged major:0 minor:741 fsType:overlay blockSize:0} overlay_0-743:{mountpoint:/var/lib/containers/storage/overlay/7a0623b47e3018c6f7d98d88ff2c30a9286f36e1fcc5af5ab6242190df7ab437/merged major:0 minor:743 fsType:overlay blockSize:0} overlay_0-754:{mountpoint:/var/lib/containers/storage/overlay/909339d0a6df29ce276e80a76faac37e089301c06c1363f69559aa278c9ca117/merged major:0 minor:754 fsType:overlay blockSize:0} overlay_0-758:{mountpoint:/var/lib/containers/storage/overlay/81e2fd58b3e73d69229bc2cf53130d15b5c110407fd9032868b61aa4dded65e4/merged major:0 minor:758 fsType:overlay blockSize:0} overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/9ad0e483ae19f21ec029eb7966624c134247c2d3b4f73eed2433bf3784e0f27c/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-771:{mountpoint:/var/lib/containers/storage/overlay/3ed274618ce5edb007c40d775276d466797ff2cf311237fc4a5c2d86f04ebe4e/merged major:0 minor:771 fsType:overlay blockSize:0} overlay_0-779:{mountpoint:/var/lib/containers/storage/overlay/e5d64e02abc8028ff93ffd94aa55ecafb46a9c975e9e518188c7dabbd3cf14a8/merged major:0 minor:779 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/6a826e36abbb108250c4dd52b4eef9230103419d6462eadb5ca01d5c5f76b769/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-783:{mountpoint:/var/lib/containers/storage/overlay/b9753b61839a40f70c34d13186d69e62d5bd6de7156ac12d4263c3a068f45e05/merged major:0 minor:783 fsType:overlay blockSize:0} overlay_0-796:{mountpoint:/var/lib/containers/storage/overlay/0f32c6a858e1903b43aec14824b219585b374741334af5ec8f481df25e6c711c/merged major:0 minor:796 fsType:overlay blockSize:0} 
overlay_0-801:{mountpoint:/var/lib/containers/storage/overlay/a58bb396c224ebeccad2b2c19170638a3edf009b00680ca9739ead1127819fa3/merged major:0 minor:801 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/21f716adc00a60ab263e565a6308d7fc524aab1d64fdbae6b17bf645e428d446/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-827:{mountpoint:/var/lib/containers/storage/overlay/96c41cad20f19b51001f8b099fbcbfa08f17ec1a842b51057bef2adf04606f6e/merged major:0 minor:827 fsType:overlay blockSize:0} overlay_0-831:{mountpoint:/var/lib/containers/storage/overlay/6bae8ca1f0b2dff02524f013267208f276516f0a9e8ca64776b96c0757f52fb9/merged major:0 minor:831 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/feaa00957a29be6c1de05f5aae659c961ce8c000b3b09e71430aeb23bb6a82a3/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/ef277ec86aff99ba2134ab8c39e52cdb8d2048b7f431debe9e159a7d064e79b3/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-857:{mountpoint:/var/lib/containers/storage/overlay/f574c21f2aba05b968fa67eb2df6c0d56e4b6641df323f203eec5225823c582a/merged major:0 minor:857 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/8b3e297ba27a1a09133342f3f2944b82bc5ac9038fabf95f4d828674643a11e4/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-864:{mountpoint:/var/lib/containers/storage/overlay/5e7613f8837f78b6612359629fe677b671cfa46ccefcacb263a70ae518a81cd4/merged major:0 minor:864 fsType:overlay blockSize:0} overlay_0-866:{mountpoint:/var/lib/containers/storage/overlay/03bc610f08d5d6f0987658e93e61b42c12558fb4b1e5767425707d48d4cb3bbd/merged major:0 minor:866 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/20757b7c7ec54aea5d7c2b2f0fc3d4538be09cdd69c6d9e6bebc646173b78f44/merged major:0 minor:868 fsType:overlay blockSize:0} 
overlay_0-874:{mountpoint:/var/lib/containers/storage/overlay/8f7cbf15908898815d34eddec57dc3ed13f13c546a03b76f55f8081874a35889/merged major:0 minor:874 fsType:overlay blockSize:0} overlay_0-886:{mountpoint:/var/lib/containers/storage/overlay/78fefdfa79ea6e518fe502bd404f2d8e2e00c746cb65f16f051ea700690381f0/merged major:0 minor:886 fsType:overlay blockSize:0} overlay_0-892:{mountpoint:/var/lib/containers/storage/overlay/af2f7e7f1ee29238820b4441932b3341edd71f1087eac316e0b7402e8de7d46b/merged major:0 minor:892 fsType:overlay blockSize:0} overlay_0-894:{mountpoint:/var/lib/containers/storage/overlay/eba47198eed9882f95a5d30d2dd4d44a2072965bfb8e48ae6dcf454caf451763/merged major:0 minor:894 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/f288bf387bd28c87558f40ad5225e21e1f73c58039e7d373e8b8af63d2e1f0d4/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-910:{mountpoint:/var/lib/containers/storage/overlay/ceb3f1eabced39ef84af2ed05a64bb73ee6a389b0d9957777a35c0e735b66a06/merged major:0 minor:910 fsType:overlay blockSize:0} overlay_0-912:{mountpoint:/var/lib/containers/storage/overlay/382cce04b837c369d9af3d08c88e23ab9f881581d6fd7713cdd48d503db513e5/merged major:0 minor:912 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/a29a13948144fdcedf5356b514f3cf51c6db2004aa546b196a174a4de2b9c30e/merged major:0 minor:93 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/bd555b349cda7e923e4d8c31fe2bde69c7f8b669fccc1846f717a453a75a02b0/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-936:{mountpoint:/var/lib/containers/storage/overlay/9bc5b716288493cca64763d9920bc1c674d163a45bf83ff79d0a1f5f2c790b80/merged major:0 minor:936 fsType:overlay blockSize:0} overlay_0-938:{mountpoint:/var/lib/containers/storage/overlay/268f1f7ecf6a3044bebc28d1c6d2518aeef7461657874f85fcfca362924cf10b/merged major:0 
minor:938 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/d67eb50892aa709104cc2a29578c1410da2977ff7b9d874922f23f3907c97556/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-952:{mountpoint:/var/lib/containers/storage/overlay/329833650a24fee0f74ac6bb2af2aaa24ace74dfec723f37b8d69119755c7aaf/merged major:0 minor:952 fsType:overlay blockSize:0} overlay_0-966:{mountpoint:/var/lib/containers/storage/overlay/6cd08595e104f2335b067a16a6b425b365d7e126ab2afe9a04548266d0b656f3/merged major:0 minor:966 fsType:overlay blockSize:0} overlay_0-974:{mountpoint:/var/lib/containers/storage/overlay/7eb02d7bc8152c2a7b6021ff3d42bf0abce78e2f5d6f666476aab18a95aedfbc/merged major:0 minor:974 fsType:overlay blockSize:0} overlay_0-978:{mountpoint:/var/lib/containers/storage/overlay/113b6e335c9d239b3e52ac2b8cdabada45b2a439222599ace7c2e601cb61e347/merged major:0 minor:978 fsType:overlay blockSize:0} overlay_0-983:{mountpoint:/var/lib/containers/storage/overlay/d7bb3327f6b51a4d38cb1828073b3a960cf49931cf006620fb1bb9e748e5dd8d/merged major:0 minor:983 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/5baff374d1f23fe56d6d5bb85c8936a3ee8bd0c2db8a0e198904354467aefb94/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/461105d785930058bb5e8e18998ab2c384782944d2af3b0ed23ac051ceed465b/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/33a77b2cd0a52f3ceea58749487607222c93ffba01c648d741d785c0ac62b467/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 18 13:23:42.678911 master-0 kubenswrapper[28504]: I0318 13:23:42.677698 28504 manager.go:217] Machine: {Timestamp:2026-03-18 13:23:42.676988976 +0000 UTC m=+0.171794761 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] 
NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ba707060b4b44f7a95adbd0306be6534 SystemUUID:ba707060-b4b4-4f7a-95ad-bd0306be6534 BootID:d4169b54-c5ea-4f66-b18c-82f9506641bd Filesystems:[{Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~projected/kube-api-access-bkdqs DeviceMajor:0 DeviceMinor:237 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~projected/kube-api-access-z9tzl DeviceMajor:0 DeviceMinor:238 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dbe3dbed53c9224d7868cbcc5f61d9bc6a0fe24d17380d115f4b59ffe8620443/userdata/shm DeviceMajor:0 DeviceMinor:401 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:780 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-574 DeviceMajor:0 DeviceMinor:574 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1075 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:737 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-53 DeviceMajor:0 DeviceMinor:53 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-560 DeviceMajor:0 DeviceMinor:560 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-51 DeviceMajor:0 DeviceMinor:51 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:379 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a378c5d453113157a9411837f552a7009188322a9c41c64301dc36db4c9e17e/userdata/shm DeviceMajor:0 DeviceMinor:636 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1053 DeviceMajor:0 DeviceMinor:1053 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1102 DeviceMajor:0 DeviceMinor:1102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93c3e972c1d72b8d1ee15395999be03050512e051706f9a30dccebe0b0487b51/userdata/shm DeviceMajor:0 DeviceMinor:839 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-449 DeviceMajor:0 DeviceMinor:449 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1f5a6ee5a82f28ebea2649b710d2502f72b2b11fe536e2a60ed0b6577c615a5e/userdata/shm DeviceMajor:0 DeviceMinor:395 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cad2dea033992ed333b90156af54dbe232cb8e77ea3617a7c7559f870c46bf61/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1076 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-827 DeviceMajor:0 
DeviceMinor:827 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c2ec5cc34fdd3560c731a9122a146c883bd92213bb1def0bc3e3795f4b6dca24/userdata/shm DeviceMajor:0 DeviceMinor:393 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-571 DeviceMajor:0 DeviceMinor:571 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1010 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~projected/kube-api-access-brvlj DeviceMajor:0 DeviceMinor:126 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6521ed821b17acabe4b6b4013792bafdd43c6335da5eba7b335ddb8b9407cf09/userdata/shm DeviceMajor:0 DeviceMinor:87 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1179 DeviceMajor:0 DeviceMinor:1179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes/kubernetes.io~projected/kube-api-access-xhzcj DeviceMajor:0 DeviceMinor:689 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8ce8e99d-7b02-4bf4-a438-adde851918cb/volumes/kubernetes.io~projected/kube-api-access-r8dfw DeviceMajor:0 DeviceMinor:225 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:246 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-783 DeviceMajor:0 DeviceMinor:783 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e5aebf642fb9f996565bf333412adf5ef6e32356850ce107ed2ae531c959857/userdata/shm DeviceMajor:0 DeviceMinor:1173 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~projected/kube-api-access-pp5xj DeviceMajor:0 DeviceMinor:456 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:442 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-936 DeviceMajor:0 DeviceMinor:936 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92e396cd-a0d9-4b6b-9d82-add1ce2a8712/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1009 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7de43cf6bf0c5d7b2b878ebc5990ddb62b5d5e375bde178cb4882acdf2057b0/userdata/shm DeviceMajor:0 DeviceMinor:841 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-894 DeviceMajor:0 DeviceMinor:894 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-952 DeviceMajor:0 DeviceMinor:952 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-801 DeviceMajor:0 
DeviceMinor:801 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/baeb6380-95e4-4e10-9798-e1e22f20bade/volumes/kubernetes.io~projected/kube-api-access-xlm4c DeviceMajor:0 DeviceMinor:461 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e4d0b174-33e4-46ee-863b-b5cc2a271b85/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:380 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f92bee18602c78e97abff330426051be6816bfa6a663d5ddee07fcf7b81c8a2/userdata/shm DeviceMajor:0 DeviceMinor:1016 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-511 DeviceMajor:0 DeviceMinor:511 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~projected/kube-api-access-2s9rk DeviceMajor:0 DeviceMinor:1079 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:475 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1118 DeviceMajor:0 DeviceMinor:1118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/85215d2477770d9146270870ad0a93b56946079eb831a104bd441b36e0111190/userdata/shm DeviceMajor:0 DeviceMinor:457 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bd033b5b-af07-4e69-9a5c-46f7c9bde95a/volumes/kubernetes.io~projected/kube-api-access-5475b DeviceMajor:0 DeviceMinor:838 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-536 DeviceMajor:0 DeviceMinor:536 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~projected/kube-api-access-wlbm6 DeviceMajor:0 DeviceMinor:477 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f19ee16fbfcf73db21dbee51bcb45264558bf405e040985a801120ef73b113c/userdata/shm DeviceMajor:0 DeviceMinor:479 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/41d00c84c5dd2a7b766779b7fb3bdc8cf10f974b3a66441fa0ef99e90cc55075/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8cfa9195fd91aaa41473c2e4d0c90829d891ed3f5c7a55b7f1376df3f2ef829a/userdata/shm DeviceMajor:0 DeviceMinor:485 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f3c106be-27ea-4849-b365-eff6d25f5e71/volumes/kubernetes.io~projected/kube-api-access-hthf8 DeviceMajor:0 DeviceMinor:901 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:604 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-633 DeviceMajor:0 DeviceMinor:633 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7fa6920b-f7d9-4758-bba9-356a2c8b1b67/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:829 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae165efde01e25d890b70e74ec7c26c2fa71fdd6d466511fae93c4948c21b840/userdata/shm DeviceMajor:0 DeviceMinor:848 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-624 DeviceMajor:0 DeviceMinor:624 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~projected/kube-api-access-882b8 DeviceMajor:0 DeviceMinor:94 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1055 DeviceMajor:0 DeviceMinor:1055 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1128 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:378 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/35d8f08f-4c57-44e0-8e8f-3969287e2a5a/volumes/kubernetes.io~projected/kube-api-access-q6d7j DeviceMajor:0 DeviceMinor:83 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1036 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d74caa04ea7449a0740efe2024b1988d41d0ee2f12b8a3006dbde07602a641f4/userdata/shm DeviceMajor:0 DeviceMinor:528 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-673 DeviceMajor:0 DeviceMinor:673 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-874 DeviceMajor:0 DeviceMinor:874 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~projected/kube-api-access-99mks DeviceMajor:0 DeviceMinor:1048 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:453 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-758 DeviceMajor:0 DeviceMinor:758 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-743 DeviceMajor:0 DeviceMinor:743 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c106be-27ea-4849-b365-eff6d25f5e71/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:900 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7dfbe5ed23f58a4b2b795d3c941f199f4ff38f6453094d9db8bcf00a90c533d5/userdata/shm DeviceMajor:0 DeviceMinor:478 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5a715e53-1874-4993-93d1-504c3470a6f5/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1037 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-427 DeviceMajor:0 DeviceMinor:427 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-771 DeviceMajor:0 DeviceMinor:771 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~projected/kube-api-access-n2msq DeviceMajor:0 DeviceMinor:1081 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13d61ed6ba86dc97c981be717623436660fa98958fd1c017e06b3a4ec064f769/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:473 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2385db6b-4286-4839-822c-aa9c52290172/volumes/kubernetes.io~projected/kube-api-access-d27hr DeviceMajor:0 DeviceMinor:847 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 
DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-117 DeviceMajor:0 DeviceMinor:117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2fb5e5e8607f93dafe9cc4e7936985507a00d052cc2ac3e0c096e4455936f109/userdata/shm DeviceMajor:0 DeviceMinor:786 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7a951627-c032-4846-821c-c4bcbf4a91b9/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:830 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-831 DeviceMajor:0 DeviceMinor:831 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:968 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-983 DeviceMajor:0 DeviceMinor:983 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1057 DeviceMajor:0 DeviceMinor:1057 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/2385db6b-4286-4839-822c-aa9c52290172/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:822 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~projected/kube-api-access-k254v DeviceMajor:0 DeviceMinor:138 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-978 DeviceMajor:0 DeviceMinor:978 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8207c4419d89bbef00d1216664ff051dff0278775861444c1650cbc77aa43b89/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fac381b9cc8f57cf484158ad2ce85bb2e249b3b6d72791568b3a2df54f9f4083/userdata/shm DeviceMajor:0 DeviceMinor:318 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17adbc1a-f29c-4278-b29a-0cc3879b753f/volumes/kubernetes.io~projected/kube-api-access-v6sr4 DeviceMajor:0 DeviceMinor:992 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-425 DeviceMajor:0 DeviceMinor:425 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2cad2401-dab1-49f7-870e-a742ebfe323f/volumes/kubernetes.io~projected/kube-api-access-rv9m7 DeviceMajor:0 DeviceMinor:317 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/1c625ab74e01dd5316e14886f1962977aaeec6d850dd1b7dad1e5cfa9c9c4cad/userdata/shm DeviceMajor:0 DeviceMinor:484 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1175 DeviceMajor:0 DeviceMinor:1175 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a890ba92b025096e34e81f53a6cf37b1fcac472b14f9584479797572ac09eeb3/userdata/shm DeviceMajor:0 DeviceMinor:856 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/volumes/kubernetes.io~projected/kube-api-access-4djxt DeviceMajor:0 DeviceMinor:971 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:381 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/22211cbad9660f7fa5d4af7845deabe61016d175198690a7f0bcdcb8c8f30f63/userdata/shm DeviceMajor:0 DeviceMinor:1049 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790/userdata/shm DeviceMajor:0 DeviceMinor:67 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-598 DeviceMajor:0 DeviceMinor:598 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b41c9132-92ef-429d-bdd5-9bdb024e04fc/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:474 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/kube-api-access-j5mgr DeviceMajor:0 DeviceMinor:235 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~projected/kube-api-access-lhqk9 DeviceMajor:0 DeviceMinor:245 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5bccf60c-5b07-4f40-8430-12bfb62661c7/volumes/kubernetes.io~projected/kube-api-access-4b6rn DeviceMajor:0 DeviceMinor:243 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~projected/kube-api-access-dzblt DeviceMajor:0 DeviceMinor:247 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-866 DeviceMajor:0 DeviceMinor:866 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:605 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475533312 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/2bf4b712cae2c0ee4c12f11a2e43506e7388879dae59520e9018e8abfe05f277/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1018 DeviceMajor:0 DeviceMinor:1018 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bc9af4af-fb39-4a51-83ae-dab3f1d159f2/volumes/kubernetes.io~projected/kube-api-access-twczm DeviceMajor:0 DeviceMinor:1172 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8a0944d2-d99a-42eb-81f5-a212b750b8f4/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:468 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/169cee91f89c2bf08a085d418adf6f39cab225d960227b563e10d5f8629dd9c5/userdata/shm DeviceMajor:0 DeviceMinor:853 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-864 DeviceMajor:0 DeviceMinor:864 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp 
DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:778 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~projected/kube-api-access-lpdw6 DeviceMajor:0 DeviceMinor:1129 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1177 DeviceMajor:0 DeviceMinor:1177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-423 DeviceMajor:0 DeviceMinor:423 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/513bdda53b682c95f37d2cf2baf57e4a5453627fbbd061d754ec2aa3ba42bd1d/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~projected/kube-api-access-7b29z DeviceMajor:0 DeviceMinor:443 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-590 DeviceMajor:0 DeviceMinor:590 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f59a12b-d690-44c5-972c-fb4b0b5819f1/volumes/kubernetes.io~projected/kube-api-access-8kpz5 DeviceMajor:0 DeviceMinor:476 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-938 DeviceMajor:0 DeviceMinor:938 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 
DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c2c4572e-0b38-4db1-96e5-6a35e29048e7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:229 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:374 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2b92a53-0b61-4e1d-a306-f9a498e48b38/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:234 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6ae0c5f6306fcc2bc4d200c31e8ec02db83741ac24faf2d432c77d6884f24b98/userdata/shm DeviceMajor:0 DeviceMinor:1051 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c074751c-6b3c-44df-aca5-42fa69662779/volumes/kubernetes.io~projected/kube-api-access-bbztv DeviceMajor:0 DeviceMinor:846 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-620 DeviceMajor:0 DeviceMinor:620 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~projected/kube-api-access-qvxs4 DeviceMajor:0 DeviceMinor:239 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:438 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/375d5112-d2be-47cf-bee1-82614ba71ed8/volumes/kubernetes.io~projected/kube-api-access-d4dcj DeviceMajor:0 DeviceMinor:781 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6f72f96fe981864c7efed48f7ec73353e9a984bf6f9e3b23eec1a4ed414c6dbd/userdata/shm DeviceMajor:0 DeviceMinor:972 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e11212398038431cec2938c7b56aa6395f70dc7ec5d7eb01558cbbe8ba561643/userdata/shm DeviceMajor:0 DeviceMinor:1084 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a296b3a4304897ed7576fe44b518ce3d5fa743a93c59d520901b7af6be80a014/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7425e13d893a722522240c3707c6140f8bfd0028da6287165144b7322ebf69c4/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1064 DeviceMajor:0 DeviceMinor:1064 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~projected/kube-api-access-ghzrb DeviceMajor:0 DeviceMinor:230 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3a039fc2-b0af-4b2c-a884-1c274c08064d/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:347 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1074 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3c0d0048-6d96-459c-8742-2f092af44a6a/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1070 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ee1eb80b-5a76-443f-a534-54d5bdc0c98a/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:469 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da6a763d-2777-40c4-ae1f-c77ced406ea2/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:377 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f29c4b1c1fd21881be5b0c8c3cbe035d4334c4ad23b7061f15e1ade0751024e/userdata/shm DeviceMajor:0 DeviceMinor:489 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~projected/kube-api-access-w6bfw DeviceMajor:0 DeviceMinor:1012 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-717 DeviceMajor:0 DeviceMinor:717 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/d36791810cb2ff2b559bc157f15e244f7a2e4ce2859637a7bd7a82ed7e5c1136/userdata/shm DeviceMajor:0 DeviceMinor:399 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3d8252ff99e6f3ec6168c39c11836a42f248fb2decc89a0e7aa350479c27f97/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:375 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-552 DeviceMajor:0 DeviceMinor:552 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-912 DeviceMajor:0 DeviceMinor:912 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-505 DeviceMajor:0 DeviceMinor:505 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-582 DeviceMajor:0 DeviceMinor:582 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5e691486-8540-4b79-8eed-b0fb829071db/volumes/kubernetes.io~projected/kube-api-access-lpl28 DeviceMajor:0 DeviceMinor:123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/734f9f10-5bde-44d5-a831-021b93fd667d/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:859 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/d2e64e1e8754957863bad8639f4beaf999396133b2b69117105f95cd95cc7cf9/userdata/shm DeviceMajor:0 DeviceMinor:472 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes/kubernetes.io~projected/kube-api-access-mthwt DeviceMajor:0 DeviceMinor:774 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c074751c-6b3c-44df-aca5-42fa69662779/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:795 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~projected/kube-api-access-xhmmv DeviceMajor:0 DeviceMinor:1080 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971/volumes/kubernetes.io~projected/kube-api-access-qwfnk DeviceMajor:0 DeviceMinor:271 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad580a2-7f58-4d66-adad-0a53d9777655/volumes/kubernetes.io~projected/kube-api-access-cw64j DeviceMajor:0 DeviceMinor:233 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb471665-2b07-48df-9881-3fb663390b23/volumes/kubernetes.io~projected/kube-api-access-6f8xk DeviceMajor:0 DeviceMinor:242 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/c0d9adef366d9f45b6f81e678d5b5bc6f1e841f8a49fa5033e91c2416ca478ff/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8385307c04cfef148742b5dc0fc754e1e2dc3ea11d3ddc8ec5d773d4246273b6/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:440 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-857 DeviceMajor:0 DeviceMinor:857 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-779 DeviceMajor:0 DeviceMinor:779 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-278 DeviceMajor:0 DeviceMinor:278 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7fa6920b-f7d9-4758-bba9-356a2c8b1b67/volumes/kubernetes.io~projected/kube-api-access-w9jhr DeviceMajor:0 DeviceMinor:833 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-592 DeviceMajor:0 DeviceMinor:592 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a039fc2-b0af-4b2c-a884-1c274c08064d/volumes/kubernetes.io~projected/kube-api-access-pmmhd 
DeviceMajor:0 DeviceMinor:351 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0f16e797-a619-46a8-948a-9fdfc8a9891f/volumes/kubernetes.io~projected/kube-api-access-q6b9b DeviceMajor:0 DeviceMinor:613 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-886 DeviceMajor:0 DeviceMinor:886 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eb8907fd-35dd-452a-8032-f2f95a6e553a/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-406 DeviceMajor:0 DeviceMinor:406 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-586 DeviceMajor:0 DeviceMinor:586 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-974 DeviceMajor:0 DeviceMinor:974 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-512 DeviceMajor:0 DeviceMinor:512 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1005 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f257d90986f3bc5c917783e713efe22ea2b8502b23f0e13b32408883ab3d2ef8/userdata/shm DeviceMajor:0 DeviceMinor:1088 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1094 
DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4b4a14ca39c077cf2f2b693e9d1fdd626838af6ef821c8cff1c28e6bf2b25ae/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-462 DeviceMajor:0 DeviceMinor:462 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a44d9eb65400a7e0c0da7a14a1ecf19a155dd4cc1a996834044260457aba64a9/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b856d226-a137-4954-82c5-5929d579387a/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1078 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-601 DeviceMajor:0 DeviceMinor:601 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1046 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c/userdata/shm DeviceMajor:0 DeviceMinor:787 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-892 DeviceMajor:0 DeviceMinor:892 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1045 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7625d2cd327e3cafffe87f32286c7b0cc92c9be78c6e712456c0ec63d1a75aa/userdata/shm DeviceMajor:0 DeviceMinor:482 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-580 DeviceMajor:0 DeviceMinor:580 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-687 DeviceMajor:0 DeviceMinor:687 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0dfd132ca6d17d71f64272cbf05802b2cf41d07648dbd09346eab0774ba709b2/userdata/shm DeviceMajor:0 DeviceMinor:775 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d24886475858f5204b2d9c7e7c2ee25187c07d787043c501faeeb9daa42c75a/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/93ea3c78-dede-468f-89a5-551133f794c5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8c177b73cce0c7f3cc26e5c3b6432debd234f03c681f0879af00f2a71a8d7119/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/35925474-e3fe-4cff-aad6-d853816618c7/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:411 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6c7a102b9c64081966ad588bf6d34058c0849b6b42caa6a8951b5cab3df0847b/userdata/shm DeviceMajor:0
DeviceMinor:843 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1095 DeviceMajor:0 DeviceMinor:1095 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-910 DeviceMajor:0 DeviceMinor:910 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8eb2fe8ff8be73af78d1650987f57fe06fd99e27a3b3400525c12b3ce524c93c/userdata/shm DeviceMajor:0 DeviceMinor:384 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/350c4fb60f4e9bdb03e757c1222dc19a3a32f7097be5c0e8e5c054e3859ca25c/userdata/shm DeviceMajor:0 DeviceMinor:390 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a2f9634bc26fc4102ec0a118fdd84688c4a5ae575980f29492ab02ddd33ee35a/userdata/shm DeviceMajor:0 DeviceMinor:649 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/330df925-8429-4b96-9bfe-caa017c21afa/volumes/kubernetes.io~projected/kube-api-access-2sqzx DeviceMajor:0 DeviceMinor:244 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9ca94153-9d1a-4b0a-a3eb-556e85f2e875/volumes/kubernetes.io~projected/kube-api-access-hbksj DeviceMajor:0 DeviceMinor:383 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:805 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/47f82c03-65d1-4a6c-ba09-8a00ae778009/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:364 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/feef592bfb9171a37aa394c51fc21738e74cfa163f594aa5160554c22d6d35c6/userdata/shm DeviceMajor:0 DeviceMinor:487 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a15447edfc940cd5b4dde2df7e8e6360f5b93278864866c14e686e33bd8d32a/userdata/shm DeviceMajor:0 DeviceMinor:88 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~projected/kube-api-access-h8v5n DeviceMajor:0 DeviceMinor:236 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16a930da-d793-486f-bcef-cf042d3c427d/volumes/kubernetes.io~projected/kube-api-access-5gv8b DeviceMajor:0 DeviceMinor:240 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:382 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/55d7b7fe63240a7ae9d576fcdf869561b098f0744d49a92d18613fdfb73c8a23/userdata/shm DeviceMajor:0 DeviceMinor:391 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a01c92f5-7938-437d-8262-11598bd8023c/volumes/kubernetes.io~projected/kube-api-access-qc69w DeviceMajor:0 DeviceMinor:248 Capacity:32475533312 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-741 DeviceMajor:0 DeviceMinor:741 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7a951627-c032-4846-821c-c4bcbf4a91b9/volumes/kubernetes.io~projected/kube-api-access-wxn4v DeviceMajor:0 DeviceMinor:836 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/17adbc1a-f29c-4278-b29a-0cc3879b753f/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:988 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1022 DeviceMajor:0 DeviceMinor:1022 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/681658b0b14bf79757ec7e2bf815ef5e737aa8b1612a9d7bf59a35cb9f00495b/userdata/shm DeviceMajor:0 DeviceMinor:356 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e4d0b174-33e4-46ee-863b-b5cc2a271b85/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:98 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~projected/kube-api-access-xcm8d DeviceMajor:0 DeviceMinor:226 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2e0fa133-60e7-47d0-996e-7e85aef2a218/volumes/kubernetes.io~projected/kube-api-access-7rccw DeviceMajor:0 DeviceMinor:96 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6ed4f640-d481-4e7a-92eb-f0eda82e138c/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1077 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-508 DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f83db5e28df12811765a35c5caa63df9e480be2ff8b0922b566cffc66ed3f105/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/234a5a6c-3790-49d0-b1e7-86f81048d96a/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:454 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe76db3e18ee08aeb5e379f2dbbf7788ff4131f5c2267fbb53a962d2c960a57b/userdata/shm DeviceMajor:0 DeviceMinor:502 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1127 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-533 DeviceMajor:0 DeviceMinor:533 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-506 DeviceMajor:0 DeviceMinor:506 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4671673d-afa0-481f-b3a2-2c2b9441b6ce/volumes/kubernetes.io~projected/kube-api-access-d7jz6 DeviceMajor:0 DeviceMinor:596 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-754 DeviceMajor:0 DeviceMinor:754 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1109 DeviceMajor:0 DeviceMinor:1109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4671673d-afa0-481f-b3a2-2c2b9441b6ce/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:643 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c707555fde8547aced9e196e247cc976f2be6c845c60b160a99fcce91955e9be/userdata/shm DeviceMajor:0 DeviceMinor:115 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e106377d9d72c29f7e269aa5cfc10e2e71a7440e3f167ac189e9be6ef45a160/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4561154d0b6ba0fd61becf7cb0b78f50d8ad270a32afdea4927372423c86f1f/userdata/shm DeviceMajor:0 DeviceMinor:452 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d26e6fa0dd7672f718650a12401a8514bc9d4479825421f550493d0cc0ccae9/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-344 
DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0213214b-693b-411b-8254-48d7826011eb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b71043687eba73124ca20af7839f57eeabe61687cf875f84c32f9f4a213acec8/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-480 DeviceMajor:0 DeviceMinor:480 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/46ae7b31-c91c-477e-a04a-a3a8541747be/volumes/kubernetes.io~projected/kube-api-access-zwsns DeviceMajor:0 DeviceMinor:114 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-966 DeviceMajor:0 DeviceMinor:966 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/83a4f641-d28f-42aa-a228-f6086d720fe4/volumes/kubernetes.io~projected/kube-api-access-9hb2q DeviceMajor:0 DeviceMinor:224 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-517 DeviceMajor:0 DeviceMinor:517 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f66902e008f5e3816231ec2d4e1a0e85eeb3453ed6e4f6ce1b4d241b3bf8e3ac/userdata/shm DeviceMajor:0 DeviceMinor:784 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/volumes/kubernetes.io~projected/kube-api-access-5dvd5 DeviceMajor:0 DeviceMinor:845 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad93612-ab12-4b30-984f-119e1b924a84/volumes/kubernetes.io~projected/kube-api-access-xzldt DeviceMajor:0 DeviceMinor:458 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bc9af4af-fb39-4a51-83ae-dab3f1d159f2/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1168 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-1136 DeviceMajor:0 DeviceMinor:1136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49ae7d75896e0c278ca4b4cb9c4f8b076e025d8e605f566c7b21c0b8fb8bc3f7/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-704 DeviceMajor:0 DeviceMinor:704 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d2cf9274-25d2-4576-bbef-1d416dfff0a9/volumes/kubernetes.io~projected/kube-api-access-vljm6 DeviceMajor:0 DeviceMinor:759 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bc77989-ecfc-4500-92a0-18c2b3b78408/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:125 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:439 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-522 DeviceMajor:0 DeviceMinor:522 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/02879f34-7062-4f07-9a5a-f965103d9182/volumes/kubernetes.io~projected/kube-api-access-jbv4l DeviceMajor:0 DeviceMinor:1047 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/369e9689-e2f6-4276-b096-8db094f8d6ae/volumes/kubernetes.io~projected/kube-api-access-crbvx DeviceMajor:0 DeviceMinor:227 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/baeb6380-95e4-4e10-9798-e1e22f20bade/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:465 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:688 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-796 DeviceMajor:0 DeviceMinor:796 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27fd370e185ff896bf0edc768c087bcfed286fcd2920b469bb1b45967f2d7e8e/userdata/shm DeviceMajor:0 DeviceMinor:1014 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-1090 DeviceMajor:0 DeviceMinor:1090 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-500 DeviceMajor:0 DeviceMinor:500 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:215 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c9a9baa5-9334-47dc-8d0c-eafc96a679b3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc0482eb4be6db452db71a2c46c144f5403bf6de42eee4937dbcaa45ae804557/userdata/shm DeviceMajor:0 DeviceMinor:493 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/734f9f10-5bde-44d5-a831-021b93fd667d/volumes/kubernetes.io~projected/kube-api-access-mq596 DeviceMajor:0 DeviceMinor:860 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1011 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/36db10b8-33a2-4b54-85e2-9809eb6bc37d/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:455 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/933a37fd-d76a-4f60-8dd8-301fb73daf42/volumes/kubernetes.io~projected/kube-api-access-5w454 DeviceMajor:0 DeviceMinor:732 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7e309570-09d0-412a-a74b-c5397d048a30/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:828 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e309570-09d0-412a-a74b-c5397d048a30/volumes/kubernetes.io~projected/kube-api-access-mcfq7 DeviceMajor:0 DeviceMinor:835 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95165b81eb17d7c2c28d6429f46259466ca6d0bdd237f4679d2704ef98282f29/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/087a43ea54c2e2fbe1816f2c58b08071e419e9384fd7fc0a1f0284ded4111e9a/userdata/shm DeviceMajor:0 DeviceMinor:934 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ebe459df-4be3-4a73-a061-5d2c637f57be/volumes/kubernetes.io~projected/kube-api-access-fqxgz DeviceMajor:0 DeviceMinor:1013 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f46be36805215ed01ce43e16395e2577e1a093a401197fa4f4e250af1a9fdef6/userdata/shm DeviceMajor:0 DeviceMinor:1020 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-460 DeviceMajor:0 DeviceMinor:460 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-42 DeviceMajor:0 DeviceMinor:42 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/317a89ea-e9dd-4167-8568-bb36e2431015/volumes/kubernetes.io~projected/kube-api-access-nllws DeviceMajor:0 DeviceMinor:95 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b8a26d81f44c36716262661566f2f3e96301ba61c1175262d41d795c78a4ddc7/userdata/shm DeviceMajor:0 DeviceMinor:333 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-429 DeviceMajor:0 DeviceMinor:429 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-421 DeviceMajor:0 DeviceMinor:421 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/20dc979a-732b-43b5-acc2-118e4c350470/volumes/kubernetes.io~projected/kube-api-access-wnvfd DeviceMajor:0 DeviceMinor:164 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bf0ea4e-8b08-488f-b252-39580f46b756/volumes/kubernetes.io~projected/kube-api-access-4mlkj DeviceMajor:0 DeviceMinor:231 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-594 DeviceMajor:0 DeviceMinor:594 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0679fb9ef1dd358deba35d738ff1064e3cdf869b26696ba0d14a1ac6ad26f588/userdata/shm DeviceMajor:0 DeviceMinor:730 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfe654b41556fae7663227362582c9c8b439e29f071dbdc91344f393aa640b68/userdata/shm DeviceMajor:0 DeviceMinor:1082 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4086d06f-d50e-4632-9da7-508909429eef/volumes/kubernetes.io~projected/kube-api-access-w4lx2 DeviceMajor:0 DeviceMinor:105 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/933a37fd-d76a-4f60-8dd8-301fb73daf42/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:731 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bd033b5b-af07-4e69-9a5c-46f7c9bde95a/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:837 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ab383e41a32fa393eda648ad2e2329488744a0ae30fb174d7a553710f1fa274a/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0679fb9ef1dd358 MacAddress:26:76:5c:6c:70:47 Speed:10000 Mtu:8900} {Name:0dfd132ca6d17d7 MacAddress:ba:8e:61:02:7f:7a Speed:10000 Mtu:8900} {Name:0e106377d9d72c2 MacAddress:ca:f1:6d:36:7c:a5 Speed:10000 Mtu:8900} {Name:0e5aebf642fb9f9 MacAddress:4a:c7:c1:ab:7d:61 Speed:10000 Mtu:8900} {Name:13d61ed6ba86dc9 MacAddress:2e:a4:67:ba:ff:d9 Speed:10000 Mtu:8900} {Name:169cee91f89c2bf MacAddress:82:8a:6f:a3:a9:1c Speed:10000 Mtu:8900} {Name:1a378c5d4531131 
MacAddress:f2:c3:7c:72:23:14 Speed:10000 Mtu:8900} {Name:1c625ab74e01dd5 MacAddress:8e:9e:d9:90:46:73 Speed:10000 Mtu:8900} {Name:1f5a6ee5a82f28e MacAddress:16:c9:97:5c:7d:f7 Speed:10000 Mtu:8900} {Name:22211cbad9660f7 MacAddress:1e:d2:8e:76:a3:1c Speed:10000 Mtu:8900} {Name:27fd370e185ff89 MacAddress:22:b9:da:b7:3d:82 Speed:10000 Mtu:8900} {Name:2bf4b712cae2c0e MacAddress:c6:59:ce:61:ee:96 Speed:10000 Mtu:8900} {Name:2fb5e5e8607f93d MacAddress:be:e7:50:e3:0a:d7 Speed:10000 Mtu:8900} {Name:350c4fb60f4e9bd MacAddress:e6:64:28:b1:4f:42 Speed:10000 Mtu:8900} {Name:513bdda53b682c9 MacAddress:d6:20:3c:4d:f7:d8 Speed:10000 Mtu:8900} {Name:55d7b7fe63240a7 MacAddress:62:55:c8:c8:86:06 Speed:10000 Mtu:8900} {Name:6521ed821b17aca MacAddress:92:d5:4f:6e:ca:4a Speed:10000 Mtu:8900} {Name:681658b0b14bf79 MacAddress:c6:e5:d8:c9:1b:a8 Speed:10000 Mtu:8900} {Name:6c7a102b9c64081 MacAddress:12:db:9e:20:cb:a9 Speed:10000 Mtu:8900} {Name:6f29c4b1c1fd218 MacAddress:0e:b9:39:2f:cf:30 Speed:10000 Mtu:8900} {Name:7dfbe5ed23f58a4 MacAddress:fa:af:ee:0a:fe:07 Speed:10000 Mtu:8900} {Name:7f19ee16fbfcf73 MacAddress:4a:f2:6d:a0:4e:23 Speed:10000 Mtu:8900} {Name:8207c4419d89bbe MacAddress:12:1d:f5:93:40:72 Speed:10000 Mtu:8900} {Name:8385307c04cfef1 MacAddress:2e:b9:68:9e:76:8d Speed:10000 Mtu:8900} {Name:880004505fafdd7 MacAddress:be:df:b3:56:b0:f6 Speed:10000 Mtu:8900} {Name:8c177b73cce0c7f MacAddress:52:fa:58:e4:0b:f0 Speed:10000 Mtu:8900} {Name:8cfa9195fd91aaa MacAddress:da:a5:e5:c6:9c:14 Speed:10000 Mtu:8900} {Name:8eb2fe8ff8be73a MacAddress:6e:1c:29:c0:1c:7c Speed:10000 Mtu:8900} {Name:93c3e972c1d72b8 MacAddress:be:1e:9e:14:3e:6f Speed:10000 Mtu:8900} {Name:95165b81eb17d7c MacAddress:66:49:9d:1f:e7:c1 Speed:10000 Mtu:8900} {Name:a2f9634bc26fc41 MacAddress:16:2b:f0:7d:56:3d Speed:10000 Mtu:8900} {Name:a44d9eb65400a7e MacAddress:06:8e:c4:6a:ed:e6 Speed:10000 Mtu:8900} {Name:a890ba92b025096 MacAddress:ae:2d:57:b6:78:99 Speed:10000 Mtu:8900} {Name:ae165efde01e25d MacAddress:1a:b5:86:80:aa:74 
Speed:10000 Mtu:8900} {Name:b4561154d0b6ba0 MacAddress:7a:23:cb:74:5b:8b Speed:10000 Mtu:8900} {Name:b71043687eba731 MacAddress:b6:f6:30:90:42:c4 Speed:10000 Mtu:8900} {Name:bc0482eb4be6db4 MacAddress:62:59:03:7b:36:7e Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:72:5b:82:b7:84:c5 Speed:0 Mtu:8900} {Name:c0d9adef366d9f4 MacAddress:4a:e6:76:05:3f:d9 Speed:10000 Mtu:8900} {Name:c2ec5cc34fdd356 MacAddress:66:6f:6c:f8:73:d0 Speed:10000 Mtu:8900} {Name:c7de43cf6bf0c5d MacAddress:da:59:a6:14:f3:8a Speed:10000 Mtu:8900} {Name:cad2dea033992ed MacAddress:aa:87:c9:c3:c6:3c Speed:10000 Mtu:8900} {Name:d2e64e1e8754957 MacAddress:02:63:69:57:f8:3d Speed:10000 Mtu:8900} {Name:d36791810cb2ff2 MacAddress:86:af:21:b9:cc:e1 Speed:10000 Mtu:8900} {Name:d74caa04ea7449a MacAddress:56:d8:94:fe:64:89 Speed:10000 Mtu:8900} {Name:d7625d2cd327e3c MacAddress:e6:55:55:30:bb:86 Speed:10000 Mtu:8900} {Name:dfe654b41556fae MacAddress:92:5c:cf:49:32:27 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:25:c2:a7 Speed:-1 Mtu:9000} {Name:f257d90986f3bc5 MacAddress:32:f8:6e:78:4e:76 Speed:10000 Mtu:8900} {Name:f3d8252ff99e6f3 MacAddress:1a:9f:85:20:ad:69 Speed:10000 Mtu:8900} {Name:f46be36805215ed MacAddress:ea:f5:db:bc:df:d2 Speed:10000 Mtu:8900} {Name:f66902e008f5e38 MacAddress:22:60:73:93:bc:24 Speed:10000 Mtu:8900} {Name:f83db5e28df1281 MacAddress:da:06:20:47:78:5b Speed:10000 Mtu:8900} {Name:f8cc997e3f27ce3 MacAddress:ea:fc:c5:a8:d1:32 Speed:10000 Mtu:8900} {Name:fac381b9cc8f57c MacAddress:0a:07:a5:f4:05:00 Speed:10000 Mtu:8900} {Name:fe76db3e18ee08a MacAddress:ae:ad:60:c3:73:2f Speed:10000 Mtu:8900} {Name:feef592bfb9171a MacAddress:36:61:5d:66:dc:f8 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:12:bd:01:20:1c:b1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} 
{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 
Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 13:23:42.679536 master-0 kubenswrapper[28504]: I0318 13:23:42.678886 28504 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 13:23:42.679536 master-0 kubenswrapper[28504]: I0318 13:23:42.678998 28504 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 13:23:42.679536 master-0 kubenswrapper[28504]: I0318 13:23:42.679282 28504 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 13:23:42.679536 master-0 kubenswrapper[28504]: I0318 13:23:42.679435 28504 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679460 28504 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679648 28504 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679658 28504 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679666 28504 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679687 28504 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679720 28504 state_mem.go:36] "Initialized new in-memory state store" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679802 28504 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679859 28504 kubelet.go:418] "Attempting to sync node with API server" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679870 28504 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 13:23:42.679877 master-0 kubenswrapper[28504]: I0318 13:23:42.679883 28504 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 13:23:42.680595 master-0 kubenswrapper[28504]: I0318 13:23:42.679893 28504 kubelet.go:324] "Adding apiserver pod source" Mar 
18 13:23:42.680595 master-0 kubenswrapper[28504]: I0318 13:23:42.679908 28504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 13:23:42.682760 master-0 kubenswrapper[28504]: I0318 13:23:42.682359 28504 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 13:23:42.682760 master-0 kubenswrapper[28504]: I0318 13:23:42.682555 28504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 18 13:23:42.682895 master-0 kubenswrapper[28504]: I0318 13:23:42.682804 28504 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 13:23:42.683047 master-0 kubenswrapper[28504]: I0318 13:23:42.682981 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 13:23:42.683047 master-0 kubenswrapper[28504]: I0318 13:23:42.683015 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 13:23:42.683047 master-0 kubenswrapper[28504]: I0318 13:23:42.683031 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 13:23:42.683047 master-0 kubenswrapper[28504]: I0318 13:23:42.683041 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 13:23:42.683047 master-0 kubenswrapper[28504]: I0318 13:23:42.683049 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 13:23:42.683047 master-0 kubenswrapper[28504]: I0318 13:23:42.683055 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683063 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683069 28504 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683076 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683083 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683092 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683105 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683143 28504 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683568 28504 server.go:1280] "Started kubelet" Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683767 28504 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683832 28504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 13:23:42.684082 master-0 kubenswrapper[28504]: I0318 13:23:42.683921 28504 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 13:23:42.684616 master-0 kubenswrapper[28504]: I0318 13:23:42.684350 28504 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 13:23:42.684272 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 18 13:23:42.687774 master-0 kubenswrapper[28504]: I0318 13:23:42.687556 28504 server.go:449] "Adding debug handlers to kubelet server" Mar 18 13:23:42.697746 master-0 kubenswrapper[28504]: E0318 13:23:42.697694 28504 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 13:23:42.703788 master-0 kubenswrapper[28504]: I0318 13:23:42.702576 28504 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 13:23:42.704042 master-0 kubenswrapper[28504]: I0318 13:23:42.704028 28504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 13:23:42.704144 master-0 kubenswrapper[28504]: I0318 13:23:42.704077 28504 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:27 +0000 UTC, rotation deadline is 2026-03-19 07:17:52.984715693 +0000 UTC Mar 18 13:23:42.704144 master-0 kubenswrapper[28504]: I0318 13:23:42.704138 28504 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h54m10.280580258s for next certificate rotation Mar 18 13:23:42.704249 master-0 kubenswrapper[28504]: I0318 13:23:42.704174 28504 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 13:23:42.704249 master-0 kubenswrapper[28504]: I0318 13:23:42.704183 28504 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 13:23:42.704390 master-0 kubenswrapper[28504]: E0318 13:23:42.704351 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:42.705049 master-0 kubenswrapper[28504]: I0318 13:23:42.705020 28504 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 13:23:42.710732 master-0 kubenswrapper[28504]: I0318 13:23:42.710501 28504 factory.go:55] Registering systemd factory Mar 18 13:23:42.710732 master-0 kubenswrapper[28504]: I0318 13:23:42.710540 28504 factory.go:221] Registration of the systemd container factory successfully Mar 18 13:23:42.713216 master-0 kubenswrapper[28504]: I0318 13:23:42.711410 28504 factory.go:153] Registering CRI-O factory Mar 18 13:23:42.713216 master-0 kubenswrapper[28504]: I0318 
13:23:42.711428 28504 factory.go:221] Registration of the crio container factory successfully Mar 18 13:23:42.713216 master-0 kubenswrapper[28504]: I0318 13:23:42.711551 28504 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 13:23:42.713216 master-0 kubenswrapper[28504]: I0318 13:23:42.711574 28504 factory.go:103] Registering Raw factory Mar 18 13:23:42.713216 master-0 kubenswrapper[28504]: I0318 13:23:42.711594 28504 manager.go:1196] Started watching for new ooms in manager Mar 18 13:23:42.713216 master-0 kubenswrapper[28504]: I0318 13:23:42.712151 28504 manager.go:319] Starting recovery of all containers Mar 18 13:23:42.716365 master-0 kubenswrapper[28504]: I0318 13:23:42.716256 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2385db6b-4286-4839-822c-aa9c52290172" volumeName="kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images" seLinuxMountContext="" Mar 18 13:23:42.716427 master-0 kubenswrapper[28504]: I0318 13:23:42.716376 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a039fc2-b0af-4b2c-a884-1c274c08064d" volumeName="kubernetes.io/projected/3a039fc2-b0af-4b2c-a884-1c274c08064d-kube-api-access-pmmhd" seLinuxMountContext="" Mar 18 13:23:42.716427 master-0 kubenswrapper[28504]: I0318 13:23:42.716390 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4086d06f-d50e-4632-9da7-508909429eef" volumeName="kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2" seLinuxMountContext="" Mar 18 13:23:42.716514 master-0 kubenswrapper[28504]: I0318 13:23:42.716482 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="65cfa12a-0711-4fba-8859-73a3f8f250a9" volumeName="kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca" seLinuxMountContext="" Mar 18 13:23:42.716514 master-0 kubenswrapper[28504]: I0318 13:23:42.716495 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93ea3c78-dede-468f-89a5-551133f794c5" volumeName="kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.716514 master-0 kubenswrapper[28504]: I0318 13:23:42.716510 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a930da-d793-486f-bcef-cf042d3c427d" volumeName="kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets" seLinuxMountContext="" Mar 18 13:23:42.716613 master-0 kubenswrapper[28504]: I0318 13:23:42.716536 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e0fa133-60e7-47d0-996e-7e85aef2a218" volumeName="kubernetes.io/projected/2e0fa133-60e7-47d0-996e-7e85aef2a218-kube-api-access-7rccw" seLinuxMountContext="" Mar 18 13:23:42.716613 master-0 kubenswrapper[28504]: I0318 13:23:42.716548 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" volumeName="kubernetes.io/projected/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-kube-api-access-q6d7j" seLinuxMountContext="" Mar 18 13:23:42.716613 master-0 kubenswrapper[28504]: I0318 13:23:42.716563 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n" seLinuxMountContext="" Mar 18 13:23:42.716613 master-0 kubenswrapper[28504]: I0318 13:23:42.716590 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw" seLinuxMountContext="" Mar 18 13:23:42.716613 master-0 kubenswrapper[28504]: I0318 13:23:42.716603 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-serving-ca" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716636 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716647 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f16e797-a619-46a8-948a-9fdfc8a9891f" volumeName="kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-tmp" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716661 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716673 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4f640-d481-4e7a-92eb-f0eda82e138c" volumeName="kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716683 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="17adbc1a-f29c-4278-b29a-0cc3879b753f" volumeName="kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716693 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c0d0048-6d96-459c-8742-2f092af44a6a" volumeName="kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca" seLinuxMountContext="" Mar 18 13:23:42.716748 master-0 kubenswrapper[28504]: I0318 13:23:42.716743 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ca94153-9d1a-4b0a-a3eb-556e85f2e875" volumeName="kubernetes.io/projected/9ca94153-9d1a-4b0a-a3eb-556e85f2e875-kube-api-access-hbksj" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716759 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2e2ef3a-a6e9-44dc-93c7-9f533e75502a" volumeName="kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716776 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0213214b-693b-411b-8254-48d7826011eb" volumeName="kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716816 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716831 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6ed4f640-d481-4e7a-92eb-f0eda82e138c" volumeName="kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716843 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a0944d2-d99a-42eb-81f5-a212b750b8f4" volumeName="kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716864 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5a93d05-3c8e-4666-9a55-d8f9e902db78" volumeName="kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716886 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.716906 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2cf9274-25d2-4576-bbef-1d416dfff0a9" volumeName="kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-utilities" seLinuxMountContext="" Mar 18 13:23:42.717010 master-0 kubenswrapper[28504]: I0318 13:23:42.717002 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2cf9274-25d2-4576-bbef-1d416dfff0a9" volumeName="kubernetes.io/projected/d2cf9274-25d2-4576-bbef-1d416dfff0a9-kube-api-access-vljm6" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717029 28504 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="ee1eb80b-5a76-443f-a534-54d5bdc0c98a" volumeName="kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717104 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a715e53-1874-4993-93d1-504c3470a6f5" volumeName="kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717116 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-policies" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717129 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b856d226-a137-4954-82c5-5929d579387a" volumeName="kubernetes.io/empty-dir/b856d226-a137-4954-82c5-5929d579387a-node-exporter-textfile" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717138 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" volumeName="kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717149 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad580a2-7f58-4d66-adad-0a53d9777655" volumeName="kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717171 28504 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="330df925-8429-4b96-9bfe-caa017c21afa" volumeName="kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717181 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-encryption-config" seLinuxMountContext="" Mar 18 13:23:42.717276 master-0 kubenswrapper[28504]: I0318 13:23:42.717221 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebe459df-4be3-4a73-a061-5d2c637f57be" volumeName="kubernetes.io/projected/ebe459df-4be3-4a73-a061-5d2c637f57be-kube-api-access-fqxgz" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717280 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717300 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5a93d05-3c8e-4666-9a55-d8f9e902db78" volumeName="kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717325 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2e2ef3a-a6e9-44dc-93c7-9f533e75502a" volumeName="kubernetes.io/projected/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-kube-api-access-5dvd5" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717350 28504 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717365 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fa6920b-f7d9-4758-bba9-356a2c8b1b67" volumeName="kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717397 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a715e53-1874-4993-93d1-504c3470a6f5" volumeName="kubernetes.io/projected/5a715e53-1874-4993-93d1-504c3470a6f5-kube-api-access-99mks" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717412 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2c4572e-0b38-4db1-96e5-6a35e29048e7" volumeName="kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717427 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2cf9274-25d2-4576-bbef-1d416dfff0a9" volumeName="kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-catalog-content" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 13:23:42.717446 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="330df925-8429-4b96-9bfe-caa017c21afa" volumeName="kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca" seLinuxMountContext="" Mar 18 13:23:42.717500 master-0 kubenswrapper[28504]: I0318 
13:23:42.717502 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da6a763d-2777-40c4-ae1f-c77ced406ea2" volumeName="kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717517 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717527 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c074751c-6b3c-44df-aca5-42fa69662779" volumeName="kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717540 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717565 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" volumeName="kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717579 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a930da-d793-486f-bcef-cf042d3c427d" volumeName="kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 
13:23:42.717599 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93ea3c78-dede-468f-89a5-551133f794c5" volumeName="kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717639 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-client" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717652 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717666 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" volumeName="kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717684 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717707 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="734f9f10-5bde-44d5-a831-021b93fd667d" volumeName="kubernetes.io/projected/734f9f10-5bde-44d5-a831-021b93fd667d-kube-api-access-mq596" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 
13:23:42.717722 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c106be-27ea-4849-b365-eff6d25f5e71" volumeName="kubernetes.io/projected/f3c106be-27ea-4849-b365-eff6d25f5e71-kube-api-access-hthf8" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717732 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bccf60c-5b07-4f40-8430-12bfb62661c7" volumeName="kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717741 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c074751c-6b3c-44df-aca5-42fa69662779" volumeName="kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717757 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" volumeName="kubernetes.io/projected/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-kube-api-access-4djxt" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717772 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717784 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e691486-8540-4b79-8eed-b0fb829071db" volumeName="kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 
kubenswrapper[28504]: I0318 13:23:42.717794 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cfa12a-0711-4fba-8859-73a3f8f250a9" volumeName="kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717802 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717817 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="baeb6380-95e4-4e10-9798-e1e22f20bade" volumeName="kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717826 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717836 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717846 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 
kubenswrapper[28504]: I0318 13:23:42.717855 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="375d5112-d2be-47cf-bee1-82614ba71ed8" volumeName="kubernetes.io/empty-dir/375d5112-d2be-47cf-bee1-82614ba71ed8-tmpfs" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717866 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2385db6b-4286-4839-822c-aa9c52290172" volumeName="kubernetes.io/projected/2385db6b-4286-4839-822c-aa9c52290172-kube-api-access-d27hr" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717875 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35925474-e3fe-4cff-aad6-d853816618c7" volumeName="kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717906 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a951627-c032-4846-821c-c4bcbf4a91b9" volumeName="kubernetes.io/secret/7a951627-c032-4846-821c-c4bcbf4a91b9-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.717929 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f59a12b-d690-44c5-972c-fb4b0b5819f1" volumeName="kubernetes.io/projected/8f59a12b-d690-44c5-972c-fb4b0b5819f1-kube-api-access-8kpz5" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718054 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-client" seLinuxMountContext="" Mar 18 
13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718072 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5a93d05-3c8e-4666-9a55-d8f9e902db78" volumeName="kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718081 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17adbc1a-f29c-4278-b29a-0cc3879b753f" volumeName="kubernetes.io/projected/17adbc1a-f29c-4278-b29a-0cc3879b753f-kube-api-access-v6sr4" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718093 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5a93d05-3c8e-4666-9a55-d8f9e902db78" volumeName="kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718102 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4671673d-afa0-481f-b3a2-2c2b9441b6ce" volumeName="kubernetes.io/configmap/4671673d-afa0-481f-b3a2-2c2b9441b6ce-config-volume" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718113 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd033b5b-af07-4e69-9a5c-46f7c9bde95a" volumeName="kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718124 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4d0b174-33e4-46ee-863b-b5cc2a271b85" volumeName="kubernetes.io/secret/e4d0b174-33e4-46ee-863b-b5cc2a271b85-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.718223 
master-0 kubenswrapper[28504]: I0318 13:23:42.718144 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2385db6b-4286-4839-822c-aa9c52290172" volumeName="kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718170 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36db10b8-33a2-4b54-85e2-9809eb6bc37d" volumeName="kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718184 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a715e53-1874-4993-93d1-504c3470a6f5" volumeName="kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718225 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4f640-d481-4e7a-92eb-f0eda82e138c" volumeName="kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718244 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fa6920b-f7d9-4758-bba9-356a2c8b1b67" volumeName="kubernetes.io/projected/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-kube-api-access-w9jhr" seLinuxMountContext="" Mar 18 13:23:42.718223 master-0 kubenswrapper[28504]: I0318 13:23:42.718255 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/projected/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-kube-api-access-7b29z" seLinuxMountContext="" Mar 18 
13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718290 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad580a2-7f58-4d66-adad-0a53d9777655" volumeName="kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j" seLinuxMountContext="" Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718320 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718360 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2c4572e-0b38-4db1-96e5-6a35e29048e7" volumeName="kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config" seLinuxMountContext="" Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718378 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2cad2401-dab1-49f7-870e-a742ebfe323f" volumeName="kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7" seLinuxMountContext="" Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718390 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2c4572e-0b38-4db1-96e5-6a35e29048e7" volumeName="kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718405 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle" seLinuxMountContext="" Mar 18 
13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718476 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4f640-d481-4e7a-92eb-f0eda82e138c" volumeName="kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718490 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933a37fd-d76a-4f60-8dd8-301fb73daf42" volumeName="kubernetes.io/secret/933a37fd-d76a-4f60-8dd8-301fb73daf42-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718508 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718517 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718541 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" volumeName="kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718550 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fa6920b-f7d9-4758-bba9-356a2c8b1b67" volumeName="kubernetes.io/secret/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718560 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718571 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a5a6c-3790-49d0-b1e7-86f81048d96a" volumeName="kubernetes.io/secret/234a5a6c-3790-49d0-b1e7-86f81048d96a-catalogserver-certs" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718580 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4671673d-afa0-481f-b3a2-2c2b9441b6ce" volumeName="kubernetes.io/projected/4671673d-afa0-481f-b3a2-2c2b9441b6ce-kube-api-access-d7jz6" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718591 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02879f34-7062-4f07-9a5a-f965103d9182" volumeName="kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718600 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a715e53-1874-4993-93d1-504c3470a6f5" volumeName="kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718631 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e691486-8540-4b79-8eed-b0fb829071db" volumeName="kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718644 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83a4f641-d28f-42aa-a228-f6086d720fe4" volumeName="kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718658 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-encryption-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718671 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd033b5b-af07-4e69-9a5c-46f7c9bde95a" volumeName="kubernetes.io/projected/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-kube-api-access-5475b" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718688 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02879f34-7062-4f07-9a5a-f965103d9182" volumeName="kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718701 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b856d226-a137-4954-82c5-5929d579387a" volumeName="kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718720 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317a89ea-e9dd-4167-8568-bb36e2431015" volumeName="kubernetes.io/projected/317a89ea-e9dd-4167-8568-bb36e2431015-kube-api-access-nllws" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718731 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718753 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" volumeName="kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718767 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4d0b174-33e4-46ee-863b-b5cc2a271b85" volumeName="kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718779 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f16e797-a619-46a8-948a-9fdfc8a9891f" volumeName="kubernetes.io/projected/0f16e797-a619-46a8-948a-9fdfc8a9891f-kube-api-access-q6b9b" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718789 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a5a6c-3790-49d0-b1e7-86f81048d96a" volumeName="kubernetes.io/empty-dir/234a5a6c-3790-49d0-b1e7-86f81048d96a-cache" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718800 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="375d5112-d2be-47cf-bee1-82614ba71ed8" volumeName="kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-webhook-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718809 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a0944d2-d99a-42eb-81f5-a212b750b8f4" volumeName="kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718820 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718844 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c074751c-6b3c-44df-aca5-42fa69662779" volumeName="kubernetes.io/projected/c074751c-6b3c-44df-aca5-42fa69662779-kube-api-access-bbztv" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718870 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" volumeName="kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718888 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718898 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718925 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" volumeName="kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-catalog-content" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.718983 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c24b6e2-965b-4b4f-ad65-ded7b3cc3971" volumeName="kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719004 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719022 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" volumeName="kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719034 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" volumeName="kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719054 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02879f34-7062-4f07-9a5a-f965103d9182" volumeName="kubernetes.io/projected/02879f34-7062-4f07-9a5a-f965103d9182-kube-api-access-jbv4l" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719078 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="369e9689-e2f6-4276-b096-8db094f8d6ae" volumeName="kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719092 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719108 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" volumeName="kubernetes.io/projected/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-kube-api-access-w6bfw" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719120 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4d0b174-33e4-46ee-863b-b5cc2a271b85" volumeName="kubernetes.io/projected/e4d0b174-33e4-46ee-863b-b5cc2a271b85-kube-api-access" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719135 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47f82c03-65d1-4a6c-ba09-8a00ae778009" volumeName="kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719189 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b856d226-a137-4954-82c5-5929d579387a" volumeName="kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719203 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cfa12a-0711-4fba-8859-73a3f8f250a9" volumeName="kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719234 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c0d0048-6d96-459c-8742-2f092af44a6a" volumeName="kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719279 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4086d06f-d50e-4632-9da7-508909429eef" volumeName="kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719299 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719312 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c106be-27ea-4849-b365-eff6d25f5e71" volumeName="kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719367 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317a89ea-e9dd-4167-8568-bb36e2431015" volumeName="kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-utilities" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719389 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a951627-c032-4846-821c-c4bcbf4a91b9" volumeName="kubernetes.io/projected/7a951627-c032-4846-821c-c4bcbf4a91b9-kube-api-access-wxn4v" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719402 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e309570-09d0-412a-a74b-c5397d048a30" volumeName="kubernetes.io/secret/7e309570-09d0-412a-a74b-c5397d048a30-samples-operator-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719419 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719461 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e0fa133-60e7-47d0-996e-7e85aef2a218" volumeName="kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-utilities" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719496 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" volumeName="kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719538 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83a4f641-d28f-42aa-a228-f6086d720fe4" volumeName="kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719553 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da6a763d-2777-40c4-ae1f-c77ced406ea2" volumeName="kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719569 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719587 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4f640-d481-4e7a-92eb-f0eda82e138c" volumeName="kubernetes.io/projected/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-api-access-xhmmv" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719599 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17adbc1a-f29c-4278-b29a-0cc3879b753f" volumeName="kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719625 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="baeb6380-95e4-4e10-9798-e1e22f20bade" volumeName="kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-kube-api-access-xlm4c" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719658 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" volumeName="kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-default-certificate" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719689 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4671673d-afa0-481f-b3a2-2c2b9441b6ce" volumeName="kubernetes.io/secret/4671673d-afa0-481f-b3a2-2c2b9441b6ce-metrics-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719702 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cfa12a-0711-4fba-8859-73a3f8f250a9" volumeName="kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719714 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719730 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2e2ef3a-a6e9-44dc-93c7-9f533e75502a" volumeName="kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719744 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" volumeName="kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719761 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719776 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="369e9689-e2f6-4276-b096-8db094f8d6ae" volumeName="kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719795 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c24b6e2-965b-4b4f-ad65-ded7b3cc3971" volumeName="kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719826 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933a37fd-d76a-4f60-8dd8-301fb73daf42" volumeName="kubernetes.io/projected/933a37fd-d76a-4f60-8dd8-301fb73daf42-kube-api-access-5w454" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719839 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719858 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a930da-d793-486f-bcef-cf042d3c427d" volumeName="kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719870 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719882 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719915 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="734f9f10-5bde-44d5-a831-021b93fd667d" volumeName="kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.719947 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="734f9f10-5bde-44d5-a831-021b93fd667d" volumeName="kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720033 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd033b5b-af07-4e69-9a5c-46f7c9bde95a" volumeName="kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720071 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c074751c-6b3c-44df-aca5-42fa69662779" volumeName="kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720083 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0213214b-693b-411b-8254-48d7826011eb" volumeName="kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720097 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b856d226-a137-4954-82c5-5929d579387a" volumeName="kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720123 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb471665-2b07-48df-9881-3fb663390b23" volumeName="kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720133 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35925474-e3fe-4cff-aad6-d853816618c7" volumeName="kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720144 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720153 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c074751c-6b3c-44df-aca5-42fa69662779" volumeName="kubernetes.io/empty-dir/c074751c-6b3c-44df-aca5-42fa69662779-snapshots" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720164 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720182 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/projected/b41c9132-92ef-429d-bdd5-9bdb024e04fc-kube-api-access-wlbm6" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720191 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720202 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="375d5112-d2be-47cf-bee1-82614ba71ed8" volumeName="kubernetes.io/projected/375d5112-d2be-47cf-bee1-82614ba71ed8-kube-api-access-d4dcj" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720227 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="375d5112-d2be-47cf-bee1-82614ba71ed8" volumeName="kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-apiservice-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720243 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee1eb80b-5a76-443f-a534-54d5bdc0c98a" volumeName="kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720254 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2385db6b-4286-4839-822c-aa9c52290172" volumeName="kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720276 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb471665-2b07-48df-9881-3fb663390b23" volumeName="kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720302 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2e2ef3a-a6e9-44dc-93c7-9f533e75502a" volumeName="kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720319 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2b92a53-0b61-4e1d-a306-f9a498e48b38" volumeName="kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720331 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad93612-ab12-4b30-984f-119e1b924a84" volumeName="kubernetes.io/projected/1ad93612-ab12-4b30-984f-119e1b924a84-kube-api-access-xzldt" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720357 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8907fd-35dd-452a-8032-f2f95a6e553a" volumeName="kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720366 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" volumeName="kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720377 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a5a6c-3790-49d0-b1e7-86f81048d96a" volumeName="kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-ca-certs" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720386 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a039fc2-b0af-4b2c-a884-1c274c08064d" volumeName="kubernetes.io/secret/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-key" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720397 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4086d06f-d50e-4632-9da7-508909429eef" volumeName="kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720405 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ce8e99d-7b02-4bf4-a438-adde851918cb" volumeName="kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720425 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0213214b-693b-411b-8254-48d7826011eb" volumeName="kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720472 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a01c92f5-7938-437d-8262-11598bd8023c" volumeName="kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720482 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="baeb6380-95e4-4e10-9798-e1e22f20bade" volumeName="kubernetes.io/empty-dir/baeb6380-95e4-4e10-9798-e1e22f20bade-cache" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720541 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="734f9f10-5bde-44d5-a831-021b93fd667d" volumeName="kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720551 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" volumeName="kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-metrics-certs" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720560 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" volumeName="kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720572 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92e396cd-a0d9-4b6b-9d82-add1ce2a8712" volumeName="kubernetes.io/secret/92e396cd-a0d9-4b6b-9d82-add1ce2a8712-tls-certificates" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720591 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720613 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a5a6c-3790-49d0-b1e7-86f81048d96a" volumeName="kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-kube-api-access-pp5xj" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720623 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a039fc2-b0af-4b2c-a884-1c274c08064d" volumeName="kubernetes.io/configmap/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-cabundle" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720632 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c0d0048-6d96-459c-8742-2f092af44a6a" volumeName="kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720648 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720657 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf0ea4e-8b08-488f-b252-39580f46b756" volumeName="kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720669 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b856d226-a137-4954-82c5-5929d579387a" volumeName="kubernetes.io/projected/b856d226-a137-4954-82c5-5929d579387a-kube-api-access-n2msq" seLinuxMountContext=""
Mar 18 13:23:42.721435 master-0 kubenswrapper[28504]: I0318 13:23:42.720700 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb471665-2b07-48df-9881-3fb663390b23" volumeName="kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config" seLinuxMountContext=""
Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723201 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee1eb80b-5a76-443f-a534-54d5bdc0c98a" volumeName="kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config" seLinuxMountContext=""
Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723307 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e0fa133-60e7-47d0-996e-7e85aef2a218" volumeName="kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-catalog-content" seLinuxMountContext=""
Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723344 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="330df925-8429-4b96-9bfe-caa017c21afa" volumeName="kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx" seLinuxMountContext=""
Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723375 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="369e9689-e2f6-4276-b096-8db094f8d6ae" volumeName="kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert" seLinuxMountContext=""
Mar 18 13:23:42.727634 master-0
kubenswrapper[28504]: I0318 13:23:42.723402 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e309570-09d0-412a-a74b-c5397d048a30" volumeName="kubernetes.io/projected/7e309570-09d0-412a-a74b-c5397d048a30-kube-api-access-mcfq7" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723422 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20dc979a-732b-43b5-acc2-118e4c350470" volumeName="kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723438 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" volumeName="kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723478 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c0d0048-6d96-459c-8742-2f092af44a6a" volumeName="kubernetes.io/projected/3c0d0048-6d96-459c-8742-2f092af44a6a-kube-api-access-2s9rk" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723586 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93ea3c78-dede-468f-89a5-551133f794c5" volumeName="kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723622 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" volumeName="kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 
kubenswrapper[28504]: I0318 13:23:42.723642 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b41c9132-92ef-429d-bdd5-9bdb024e04fc" volumeName="kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723657 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bc77989-ecfc-4500-92a0-18c2b3b78408" volumeName="kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723752 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c106be-27ea-4849-b365-eff6d25f5e71" volumeName="kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723775 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="369e9689-e2f6-4276-b096-8db094f8d6ae" volumeName="kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723796 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="317a89ea-e9dd-4167-8568-bb36e2431015" volumeName="kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-catalog-content" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723833 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" volumeName="kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-utilities" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 
kubenswrapper[28504]: I0318 13:23:42.723900 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46ae7b31-c91c-477e-a04a-a3a8541747be" volumeName="kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723953 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4f640-d481-4e7a-92eb-f0eda82e138c" volumeName="kubernetes.io/empty-dir/6ed4f640-d481-4e7a-92eb-f0eda82e138c-volume-directive-shadow" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723971 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83a4f641-d28f-42aa-a228-f6086d720fe4" volumeName="kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.723990 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5a93d05-3c8e-4666-9a55-d8f9e902db78" volumeName="kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724025 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" volumeName="kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-stats-auth" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724046 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad580a2-7f58-4d66-adad-0a53d9777655" volumeName="kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 
kubenswrapper[28504]: I0318 13:23:42.724072 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc9af4af-fb39-4a51-83ae-dab3f1d159f2" volumeName="kubernetes.io/projected/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-kube-api-access-twczm" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724087 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36db10b8-33a2-4b54-85e2-9809eb6bc37d" volumeName="kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724125 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47f82c03-65d1-4a6c-ba09-8a00ae778009" volumeName="kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724159 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" volumeName="kubernetes.io/configmap/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-service-ca-bundle" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724172 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79758b7-9129-496c-abec-80d455648454" volumeName="kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724189 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc9af4af-fb39-4a51-83ae-dab3f1d159f2" volumeName="kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs" seLinuxMountContext="" Mar 
18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724202 28504 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f16e797-a619-46a8-948a-9fdfc8a9891f" volumeName="kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-tuned" seLinuxMountContext="" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724214 28504 reconstruct.go:97] "Volume reconstruction finished" Mar 18 13:23:42.727634 master-0 kubenswrapper[28504]: I0318 13:23:42.724225 28504 reconciler.go:26] "Reconciler: start to sync state" Mar 18 13:23:42.740888 master-0 kubenswrapper[28504]: I0318 13:23:42.740741 28504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 13:23:42.747881 master-0 kubenswrapper[28504]: I0318 13:23:42.747830 28504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 13:23:42.747881 master-0 kubenswrapper[28504]: I0318 13:23:42.747885 28504 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 13:23:42.748141 master-0 kubenswrapper[28504]: I0318 13:23:42.747914 28504 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 13:23:42.748141 master-0 kubenswrapper[28504]: E0318 13:23:42.747983 28504 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 13:23:42.768566 master-0 kubenswrapper[28504]: I0318 13:23:42.768532 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/0.log" Mar 18 13:23:42.769050 master-0 kubenswrapper[28504]: I0318 13:23:42.769019 28504 generic.go:334] "Generic (PLEG): container finished" podID="d2e2ef3a-a6e9-44dc-93c7-9f533e75502a" containerID="35a6e219e9c2c306481d98d16c4ce589a46a92dae3b8a5616cb81c85790b7339" exitCode=255 Mar 18 13:23:42.777131 
master-0 kubenswrapper[28504]: I0318 13:23:42.777089 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/2.log" Mar 18 13:23:42.777893 master-0 kubenswrapper[28504]: I0318 13:23:42.777864 28504 generic.go:334] "Generic (PLEG): container finished" podID="a01c92f5-7938-437d-8262-11598bd8023c" containerID="3fe2b30d3a88bc253d1cf4b9fbf09e7d5bc69a80e3d0a14ba44ecbd5f6425a1e" exitCode=1 Mar 18 13:23:42.787271 master-0 kubenswrapper[28504]: I0318 13:23:42.787070 28504 generic.go:334] "Generic (PLEG): container finished" podID="2385db6b-4286-4839-822c-aa9c52290172" containerID="76706e531d703321ab797434284e0ec77d46262c1f93022a12f301f5e424b532" exitCode=0 Mar 18 13:23:42.805713 master-0 kubenswrapper[28504]: E0318 13:23:42.805447 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:42.816721 master-0 kubenswrapper[28504]: I0318 13:23:42.816670 28504 generic.go:334] "Generic (PLEG): container finished" podID="e4d0b174-33e4-46ee-863b-b5cc2a271b85" containerID="1b8157f4c23747a17d99cd1a75b5fd67d7d1923b9d3c78ebf701ed19d3b1c48e" exitCode=0 Mar 18 13:23:42.820591 master-0 kubenswrapper[28504]: I0318 13:23:42.819272 28504 generic.go:334] "Generic (PLEG): container finished" podID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerID="0aa6e3b114a8524f519800cee5439f1ad3e156a1def4a154cf20f82ebe9a3ef2" exitCode=0 Mar 18 13:23:42.822667 master-0 kubenswrapper[28504]: I0318 13:23:42.822617 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/0.log" Mar 18 13:23:42.825796 master-0 kubenswrapper[28504]: I0318 13:23:42.824366 28504 generic.go:334] "Generic (PLEG): container finished" podID="bd033b5b-af07-4e69-9a5c-46f7c9bde95a" 
containerID="e20cb392c2151c9b567d2f9cb92d9caffc6ffa0a0c94ec6c22fe2417cecc2fef" exitCode=255 Mar 18 13:23:42.829363 master-0 kubenswrapper[28504]: I0318 13:23:42.829304 28504 generic.go:334] "Generic (PLEG): container finished" podID="9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a" containerID="753ffebdad8f9e4671d1507f1e261536c6a9a0234c3ae2147357296698c58faf" exitCode=0 Mar 18 13:23:42.834534 master-0 kubenswrapper[28504]: I0318 13:23:42.833764 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/0.log" Mar 18 13:23:42.834534 master-0 kubenswrapper[28504]: I0318 13:23:42.834458 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/cluster-cloud-controller-manager/0.log" Mar 18 13:23:42.834996 master-0 kubenswrapper[28504]: I0318 13:23:42.834767 28504 generic.go:334] "Generic (PLEG): container finished" podID="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" containerID="416f123fbbc7d637d66d383e9de461fd5b529d5d437df7cc58e7901b8e2c57aa" exitCode=1 Mar 18 13:23:42.835055 master-0 kubenswrapper[28504]: I0318 13:23:42.834993 28504 generic.go:334] "Generic (PLEG): container finished" podID="d3f208f9-e2e1-4fae-a47a-f58b722e0ad5" containerID="59e43a5798785560fb9b5499b32da91edb8ae46a4589c047f8415fd258612a45" exitCode=1 Mar 18 13:23:42.848181 master-0 kubenswrapper[28504]: E0318 13:23:42.848099 28504 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 13:23:42.850826 master-0 kubenswrapper[28504]: I0318 13:23:42.850040 28504 generic.go:334] "Generic (PLEG): container finished" podID="17adbc1a-f29c-4278-b29a-0cc3879b753f" containerID="ea098486f4dc00d516848689091052951444062d9e2ae5ef81e67aadee11ef6e" exitCode=0 
Mar 18 13:23:42.857455 master-0 kubenswrapper[28504]: I0318 13:23:42.855747 28504 generic.go:334] "Generic (PLEG): container finished" podID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerID="56a1ebe6b0097c7f125f082d76f61cc3fac21860bdfba2e3c6f543dc04756bf5" exitCode=0 Mar 18 13:23:42.900531 master-0 kubenswrapper[28504]: I0318 13:23:42.900466 28504 generic.go:334] "Generic (PLEG): container finished" podID="b856d226-a137-4954-82c5-5929d579387a" containerID="9d044af973bd01a08e8fcad763eafdffa737337304ddc7ac842ceb7418ae0dec" exitCode=0 Mar 18 13:23:42.905200 master-0 kubenswrapper[28504]: I0318 13:23:42.905162 28504 generic.go:334] "Generic (PLEG): container finished" podID="3a039fc2-b0af-4b2c-a884-1c274c08064d" containerID="d7e8c2fdb968a1130191a8765d10f0d71f285ef10fc757a0ab5ebbff82c6fcc5" exitCode=0 Mar 18 13:23:42.905601 master-0 kubenswrapper[28504]: E0318 13:23:42.905567 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:42.921604 master-0 kubenswrapper[28504]: I0318 13:23:42.921558 28504 generic.go:334] "Generic (PLEG): container finished" podID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerID="b10031bd90b55a9a696a81d72f5edb8059040095aa52e3160902d05b4a7cd6cf" exitCode=0 Mar 18 13:23:42.925442 master-0 kubenswrapper[28504]: I0318 13:23:42.925392 28504 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" containerID="3683163827a4edece2407b15e519e57ed5810d9901b275e4063ae3e6c8a46a7c" exitCode=0 Mar 18 13:23:42.925558 master-0 kubenswrapper[28504]: I0318 13:23:42.925455 28504 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" containerID="aa564c30adb5b4df8107a74993a455b716489617f02c382f60c47021de96afac" exitCode=0 Mar 18 13:23:42.925558 master-0 kubenswrapper[28504]: I0318 13:23:42.925466 28504 generic.go:334] "Generic (PLEG): container finished" podID="16a930da-d793-486f-bcef-cf042d3c427d" 
containerID="7c8f77a7d65f8fc3bc4cbe1de5c1b2400c99f286cccd6e89e58de1418e09f721" exitCode=0 Mar 18 13:23:42.931075 master-0 kubenswrapper[28504]: I0318 13:23:42.930088 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8jrfz_234a5a6c-3790-49d0-b1e7-86f81048d96a/manager/0.log" Mar 18 13:23:42.932472 master-0 kubenswrapper[28504]: I0318 13:23:42.932426 28504 generic.go:334] "Generic (PLEG): container finished" podID="234a5a6c-3790-49d0-b1e7-86f81048d96a" containerID="e421e24f0032092d372aa8567bf62089ec16fcc76e9db4714f59ae66d20632af" exitCode=1 Mar 18 13:23:42.937231 master-0 kubenswrapper[28504]: I0318 13:23:42.937180 28504 generic.go:334] "Generic (PLEG): container finished" podID="1bf0ea4e-8b08-488f-b252-39580f46b756" containerID="cdeecfaffa91bced4d378bfbb335379410c275c90260acdb4404f15430b5fb3b" exitCode=0 Mar 18 13:23:42.940919 master-0 kubenswrapper[28504]: I0318 13:23:42.940702 28504 generic.go:334] "Generic (PLEG): container finished" podID="cb471665-2b07-48df-9881-3fb663390b23" containerID="68c5ffa759fcc437f54d7bd3e789e8c2d2ddd9ad3679a98335c6cd2c8429c33c" exitCode=0 Mar 18 13:23:42.944324 master-0 kubenswrapper[28504]: I0318 13:23:42.944133 28504 generic.go:334] "Generic (PLEG): container finished" podID="2e0fa133-60e7-47d0-996e-7e85aef2a218" containerID="d80a42f9544d6f5e1c4d2d61a2c430a6b656748331ed61e7746687405bcba5ee" exitCode=0 Mar 18 13:23:42.944324 master-0 kubenswrapper[28504]: I0318 13:23:42.944158 28504 generic.go:334] "Generic (PLEG): container finished" podID="2e0fa133-60e7-47d0-996e-7e85aef2a218" containerID="836f1a7c930855d212400a0b9071a021a023048ff4b32354f92013971f61bd95" exitCode=0 Mar 18 13:23:42.946603 master-0 kubenswrapper[28504]: I0318 13:23:42.946576 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-mk4d5_8a0944d2-d99a-42eb-81f5-a212b750b8f4/network-operator/0.log" Mar 18 13:23:42.946738 master-0 
kubenswrapper[28504]: I0318 13:23:42.946710 28504 generic.go:334] "Generic (PLEG): container finished" podID="8a0944d2-d99a-42eb-81f5-a212b750b8f4" containerID="6b882cdda72d564225a61ad06267c4be93a7acf1cff49af344ca080e3af8cb10" exitCode=255 Mar 18 13:23:42.952868 master-0 kubenswrapper[28504]: I0318 13:23:42.952826 28504 generic.go:334] "Generic (PLEG): container finished" podID="c9a9baa5-9334-47dc-8d0c-eafc96a679b3" containerID="50dc217c7e050a83d8f94c0b071aa6cc499aaacdf4273693193aaa83fb657bb6" exitCode=0 Mar 18 13:23:42.962856 master-0 kubenswrapper[28504]: I0318 13:23:42.962812 28504 generic.go:334] "Generic (PLEG): container finished" podID="330df925-8429-4b96-9bfe-caa017c21afa" containerID="25a6724684f01c1f8f810c77d2f577ea86053b8875f39a3ebd8958705d59785e" exitCode=0 Mar 18 13:23:42.972855 master-0 kubenswrapper[28504]: I0318 13:23:42.972788 28504 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="6d83f8447a991a30e2932285a9ad9391e4be4f81c9b4bec0c838fb37dccbbcda" exitCode=0 Mar 18 13:23:42.982446 master-0 kubenswrapper[28504]: I0318 13:23:42.982315 28504 generic.go:334] "Generic (PLEG): container finished" podID="83a4f641-d28f-42aa-a228-f6086d720fe4" containerID="f0be59386377b23fb8fc7601c10eb271b7e5a273e5f53453eae290b11eb4345f" exitCode=0 Mar 18 13:23:42.983829 master-0 kubenswrapper[28504]: I0318 13:23:42.983788 28504 generic.go:334] "Generic (PLEG): container finished" podID="2669bc40-9271-4494-9e21-290cd4383b05" containerID="da68cebc5e87d23d463a0c9379a0a5014fb73cbd24809cddd09f3686c920cb75" exitCode=0 Mar 18 13:23:42.989675 master-0 kubenswrapper[28504]: I0318 13:23:42.989644 28504 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="ab5b83d779ab6537d0a99adbe63763b23469f75fb94b22198d32842d6404c007" exitCode=0 Mar 18 13:23:42.989675 master-0 kubenswrapper[28504]: I0318 13:23:42.989670 28504 generic.go:334] "Generic (PLEG): container finished" 
podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="939081bad25da33d133eff9bd4c3f679efe60bd386467b9c7ea166c2edea2ccd" exitCode=0 Mar 18 13:23:42.989675 master-0 kubenswrapper[28504]: I0318 13:23:42.989679 28504 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="5308c4990ee617dab17b794620acded12b71b96d5a2e7a368924488be2073775" exitCode=0 Mar 18 13:23:42.989675 master-0 kubenswrapper[28504]: I0318 13:23:42.989686 28504 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="158c0af92fac11481577106174b03b386a7b412c2e448451da762deb74b713bd" exitCode=0 Mar 18 13:23:42.989860 master-0 kubenswrapper[28504]: I0318 13:23:42.989693 28504 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="9b8b0976c817ccd695886d1ba83ffcc31d11cd506356512ccbdf4d71a9024f68" exitCode=0 Mar 18 13:23:42.989860 master-0 kubenswrapper[28504]: I0318 13:23:42.989701 28504 generic.go:334] "Generic (PLEG): container finished" podID="46ae7b31-c91c-477e-a04a-a3a8541747be" containerID="3ae100b68292305eb4454b58c0f9a6577d27f65eaa549bd19854723db5585aee" exitCode=0 Mar 18 13:23:42.991981 master-0 kubenswrapper[28504]: I0318 13:23:42.991912 28504 generic.go:334] "Generic (PLEG): container finished" podID="8ce8e99d-7b02-4bf4-a438-adde851918cb" containerID="f140128413a59472c05ccbf8a67ba06b17c2bdd86a6d5881d2c8c4864d65b7ae" exitCode=0 Mar 18 13:23:42.997362 master-0 kubenswrapper[28504]: I0318 13:23:42.997324 28504 generic.go:334] "Generic (PLEG): container finished" podID="1ad580a2-7f58-4d66-adad-0a53d9777655" containerID="9d80034b295c4c336556d93672546628c76e7f2de665797ca7d2385c75fae222" exitCode=0 Mar 18 13:23:42.999856 master-0 kubenswrapper[28504]: I0318 13:23:42.999776 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/4.log" Mar 18 13:23:43.000556 master-0 kubenswrapper[28504]: I0318 13:23:43.000529 28504 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" exitCode=1 Mar 18 13:23:43.004677 master-0 kubenswrapper[28504]: I0318 13:23:43.004660 28504 generic.go:334] "Generic (PLEG): container finished" podID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerID="eb1bc4c2de4eef02c4efa419b662829eddc1e0031cc060ee0744bc0347f66eeb" exitCode=0 Mar 18 13:23:43.006119 master-0 kubenswrapper[28504]: E0318 13:23:43.006089 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:43.007596 master-0 kubenswrapper[28504]: I0318 13:23:43.007560 28504 generic.go:334] "Generic (PLEG): container finished" podID="317a89ea-e9dd-4167-8568-bb36e2431015" containerID="a1b3cbd921167497cc25d4be38b7e050d4aa38d0f715b02595c72432dd0720c9" exitCode=0 Mar 18 13:23:43.007596 master-0 kubenswrapper[28504]: I0318 13:23:43.007594 28504 generic.go:334] "Generic (PLEG): container finished" podID="317a89ea-e9dd-4167-8568-bb36e2431015" containerID="96bc6ce0c52ae9fd7504e8b7f02dc2906216b82766d2b59e05d4794bbbc1c386" exitCode=0 Mar 18 13:23:43.009431 master-0 kubenswrapper[28504]: I0318 13:23:43.009415 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-f8zc2_734f9f10-5bde-44d5-a831-021b93fd667d/machine-approver-controller/0.log" Mar 18 13:23:43.009844 master-0 kubenswrapper[28504]: I0318 13:23:43.009820 28504 generic.go:334] "Generic (PLEG): container finished" podID="734f9f10-5bde-44d5-a831-021b93fd667d" containerID="bf9efcefa6211001d8f08607f67b510663e50278def7ed0ac4963e0d3210e802" exitCode=255 Mar 18 13:23:43.012768 master-0 
kubenswrapper[28504]: I0318 13:23:43.012578 28504 generic.go:334] "Generic (PLEG): container finished" podID="c2c4572e-0b38-4db1-96e5-6a35e29048e7" containerID="d02c6c3cdba1a1883c0637cac9a306051c4ef216e0033461edc5cc690bbb087e" exitCode=0 Mar 18 13:23:43.022311 master-0 kubenswrapper[28504]: I0318 13:23:43.021923 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-kbpvr_36db10b8-33a2-4b54-85e2-9809eb6bc37d/package-server-manager/0.log" Mar 18 13:23:43.022491 master-0 kubenswrapper[28504]: I0318 13:23:43.022461 28504 generic.go:334] "Generic (PLEG): container finished" podID="36db10b8-33a2-4b54-85e2-9809eb6bc37d" containerID="763c041e89e36c29391b2cb35cd74d0ff6b0e6c63f07f02d238f792452bdf127" exitCode=1 Mar 18 13:23:43.049002 master-0 kubenswrapper[28504]: I0318 13:23:43.030393 28504 generic.go:334] "Generic (PLEG): container finished" podID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" containerID="476726e8baea3eb0038921569d3e349c70ed11ed86a08818d39ebf2ee00767e9" exitCode=0 Mar 18 13:23:43.049002 master-0 kubenswrapper[28504]: I0318 13:23:43.043884 28504 generic.go:334] "Generic (PLEG): container finished" podID="b41c9132-92ef-429d-bdd5-9bdb024e04fc" containerID="4cfcb6d43544aaea92892e1f33a27bf4899640538c587e1c1eacf22ba718bb42" exitCode=0 Mar 18 13:23:43.049002 master-0 kubenswrapper[28504]: I0318 13:23:43.047668 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_2fca2c29-3791-43b8-97f1-a9a6d58ec92d/installer/0.log" Mar 18 13:23:43.049002 master-0 kubenswrapper[28504]: I0318 13:23:43.047721 28504 generic.go:334] "Generic (PLEG): container finished" podID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerID="e194112a7651927c16369879335d3ba30bda7302ae714dc813e610c582b27c4a" exitCode=1 Mar 18 13:23:43.049002 master-0 kubenswrapper[28504]: E0318 13:23:43.048182 28504 kubelet.go:2359] "Skipping pod synchronization" err="container runtime 
status check may not have completed yet" Mar 18 13:23:43.049806 master-0 kubenswrapper[28504]: I0318 13:23:43.049778 28504 generic.go:334] "Generic (PLEG): container finished" podID="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" containerID="ab2a031030fcae05fc3de61ba8959c18a5ad439c27b9db65dec83eb634e7acf2" exitCode=0 Mar 18 13:23:43.049895 master-0 kubenswrapper[28504]: I0318 13:23:43.049806 28504 generic.go:334] "Generic (PLEG): container finished" podID="35d8f08f-4c57-44e0-8e8f-3969287e2a5a" containerID="d2897cc2c8562aeaec2aa9acaf8c187af617a13c66e8bd4ee5d5cb3869d53d9c" exitCode=0 Mar 18 13:23:43.059193 master-0 kubenswrapper[28504]: I0318 13:23:43.057099 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658" exitCode=0 Mar 18 13:23:43.060294 master-0 kubenswrapper[28504]: I0318 13:23:43.060250 28504 generic.go:334] "Generic (PLEG): container finished" podID="5bccf60c-5b07-4f40-8430-12bfb62661c7" containerID="a2ae2420b34ef246b54f0a6fe9ec2894bc3cd6d0edd11b8cc50a2c6c8fb9ff32" exitCode=0 Mar 18 13:23:43.063488 master-0 kubenswrapper[28504]: I0318 13:23:43.063441 28504 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="a907a02503b5df781613b6da0961b359781cced0221882a7b1a1568fee1b84fe" exitCode=0 Mar 18 13:23:43.063488 master-0 kubenswrapper[28504]: I0318 13:23:43.063474 28504 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="7ddc54cddedd2bdae32224357d62187da26cebbd3a01e7a295c7e87fef85c020" exitCode=0 Mar 18 13:23:43.063488 master-0 kubenswrapper[28504]: I0318 13:23:43.063484 28504 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="dceb07db18c0d8faeb0249820c09e2ecee50c97d0f9fd01d9a209e9a350fd96e" exitCode=0 Mar 18 13:23:43.065351 master-0 kubenswrapper[28504]: I0318 13:23:43.065313 28504 
generic.go:334] "Generic (PLEG): container finished" podID="93ea3c78-dede-468f-89a5-551133f794c5" containerID="ef423dc670cb4c823cf16513eca393eb2237d93c1c3d72d4a3125b276f8fdce7" exitCode=0 Mar 18 13:23:43.067716 master-0 kubenswrapper[28504]: I0318 13:23:43.067660 28504 generic.go:334] "Generic (PLEG): container finished" podID="e2f2982b-2117-4c16-a4e3-f7e14c7ddc41" containerID="73eeb12fc6c56e08bfbb513524488ba1e9f64fd246eaef82ed0bfd67ecb4ec86" exitCode=0 Mar 18 13:23:43.069393 master-0 kubenswrapper[28504]: I0318 13:23:43.069335 28504 generic.go:334] "Generic (PLEG): container finished" podID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerID="6c9c61fe13233fc2963a22bc53cbe738d781d6a4794b40b0e2484f290dbd30f4" exitCode=0 Mar 18 13:23:43.073737 master-0 kubenswrapper[28504]: I0318 13:23:43.073710 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-p6tvz_369e9689-e2f6-4276-b096-8db094f8d6ae/cluster-node-tuning-operator/0.log" Mar 18 13:23:43.073846 master-0 kubenswrapper[28504]: I0318 13:23:43.073749 28504 generic.go:334] "Generic (PLEG): container finished" podID="369e9689-e2f6-4276-b096-8db094f8d6ae" containerID="a4b53bab35719b1de9b4d4e1f4c3fdf356bb114dd12ac3e84e5af4fe101ae6bf" exitCode=1 Mar 18 13:23:43.079415 master-0 kubenswrapper[28504]: I0318 13:23:43.079368 28504 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="e371dab0b58bcbcc5b1907ef685fdfadda0906d8d24523dfbc948bf72419b864" exitCode=0 Mar 18 13:23:43.079415 master-0 kubenswrapper[28504]: I0318 13:23:43.079402 28504 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="f2eaa8545a70bd93c6fda5c0d0d68dc69b5076035140e52196a502a53a980e02" exitCode=0 Mar 18 13:23:43.079415 master-0 kubenswrapper[28504]: I0318 13:23:43.079412 28504 generic.go:334] "Generic (PLEG): container finished" 
podID="094204df314fe45bd5af12ca1b4622bb" containerID="200e8cc7b998c12ebab49945348ad20ad11d9b022c6433d242aed2cda0e0a774" exitCode=0 Mar 18 13:23:43.081228 master-0 kubenswrapper[28504]: I0318 13:23:43.081188 28504 generic.go:334] "Generic (PLEG): container finished" podID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerID="94d2bc335ae0ececbd31f7ab13a8fd2ea166534945dafb090b610544f37ca4e7" exitCode=0 Mar 18 13:23:43.086903 master-0 kubenswrapper[28504]: I0318 13:23:43.086836 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-4r95z_baeb6380-95e4-4e10-9798-e1e22f20bade/manager/0.log" Mar 18 13:23:43.087124 master-0 kubenswrapper[28504]: I0318 13:23:43.086915 28504 generic.go:334] "Generic (PLEG): container finished" podID="baeb6380-95e4-4e10-9798-e1e22f20bade" containerID="c8d0e68fce468a6cbf7a9e25b4e7afd1002b3dc75deb637dce883f568f47b361" exitCode=1 Mar 18 13:23:43.092181 master-0 kubenswrapper[28504]: I0318 13:23:43.092128 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/0.log" Mar 18 13:23:43.092181 master-0 kubenswrapper[28504]: I0318 13:23:43.092179 28504 generic.go:334] "Generic (PLEG): container finished" podID="933a37fd-d76a-4f60-8dd8-301fb73daf42" containerID="2442652c47cb11893c3b83d3fad2866d5f95d1a4285de57aa76d8638f0a3ca4c" exitCode=1 Mar 18 13:23:43.094795 master-0 kubenswrapper[28504]: I0318 13:23:43.094744 28504 generic.go:334] "Generic (PLEG): container finished" podID="73c93ee3-cf14-4fea-b2a7-ccfb56e55be4" containerID="ca9e7669e9cbda3d1efa1643b57ac236e8b9cc289164b306448a040fc87f9948" exitCode=0 Mar 18 13:23:43.099555 master-0 kubenswrapper[28504]: I0318 13:23:43.099491 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-xcbtb_eb8907fd-35dd-452a-8032-f2f95a6e553a/approver/1.log" Mar 18 13:23:43.100339 master-0 kubenswrapper[28504]: I0318 13:23:43.100250 28504 generic.go:334] "Generic (PLEG): container finished" podID="eb8907fd-35dd-452a-8032-f2f95a6e553a" containerID="0e76cffa571436858041a59dc3cb08e8f19f5b925a773925f3208413a9e44b8f" exitCode=1 Mar 18 13:23:43.102440 master-0 kubenswrapper[28504]: I0318 13:23:43.102281 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88cd8323-8857-41fe-85d4-e6064330ec71/installer/0.log" Mar 18 13:23:43.102440 master-0 kubenswrapper[28504]: I0318 13:23:43.102355 28504 generic.go:334] "Generic (PLEG): container finished" podID="88cd8323-8857-41fe-85d4-e6064330ec71" containerID="2930eafa2605e45a0822de041f245bf9aca0638ca211202bfcc70902ad20170b" exitCode=1 Mar 18 13:23:43.104421 master-0 kubenswrapper[28504]: I0318 13:23:43.104361 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-c7nh9_0213214b-693b-411b-8254-48d7826011eb/openshift-config-operator/1.log" Mar 18 13:23:43.105245 master-0 kubenswrapper[28504]: I0318 13:23:43.105205 28504 generic.go:334] "Generic (PLEG): container finished" podID="0213214b-693b-411b-8254-48d7826011eb" containerID="661c3bb10e2521fe20a10e5fb07df9df3af85336e6dda238f88d912cc35e4a9f" exitCode=255 Mar 18 13:23:43.105245 master-0 kubenswrapper[28504]: I0318 13:23:43.105232 28504 generic.go:334] "Generic (PLEG): container finished" podID="0213214b-693b-411b-8254-48d7826011eb" containerID="5c89794c76d4515a3e7d3c02069fb4c61a25855d4eed6b9182b128d2ddf1520d" exitCode=0 Mar 18 13:23:43.106230 master-0 kubenswrapper[28504]: E0318 13:23:43.106179 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:43.108497 master-0 kubenswrapper[28504]: I0318 13:23:43.108373 
28504 generic.go:334] "Generic (PLEG): container finished" podID="4bc77989-ecfc-4500-92a0-18c2b3b78408" containerID="da555fd9f47f4294570e6ad25c16548ca14ae9ec137f334d01bde47cd422dcf9" exitCode=0 Mar 18 13:23:43.109944 master-0 kubenswrapper[28504]: I0318 13:23:43.109885 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_245f3af1-ccfb-4191-9a34-00852e52a73d/installer/0.log" Mar 18 13:23:43.110008 master-0 kubenswrapper[28504]: I0318 13:23:43.109928 28504 generic.go:334] "Generic (PLEG): container finished" podID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerID="2590a481a145d76e2b7df7ede04cc027447c99a8ab51376b367af34e50c7be34" exitCode=1 Mar 18 13:23:43.111191 master-0 kubenswrapper[28504]: I0318 13:23:43.111152 28504 generic.go:334] "Generic (PLEG): container finished" podID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerID="3aecc1592a5c76f7851ff01bf9ec75d38c020718af10663c3a3924f329ae17c6" exitCode=0 Mar 18 13:23:43.113434 master-0 kubenswrapper[28504]: I0318 13:23:43.113381 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 13:23:43.115211 master-0 kubenswrapper[28504]: I0318 13:23:43.115117 28504 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875" exitCode=1 Mar 18 13:23:43.115211 master-0 kubenswrapper[28504]: I0318 13:23:43.115207 28504 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="869e4216741d6f450122345795f65e862d784b38e4a915e11371713c52cf93a3" exitCode=0 Mar 18 13:23:43.120982 master-0 kubenswrapper[28504]: I0318 13:23:43.120915 28504 generic.go:334] "Generic (PLEG): container finished" podID="65cfa12a-0711-4fba-8859-73a3f8f250a9" 
containerID="8a450d61a86ca02f43befd316491f266f23f5f89125343df32e08e9b38e85140" exitCode=0 Mar 18 13:23:43.122993 master-0 kubenswrapper[28504]: I0318 13:23:43.122951 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/3.log" Mar 18 13:23:43.122993 master-0 kubenswrapper[28504]: I0318 13:23:43.122989 28504 generic.go:334] "Generic (PLEG): container finished" podID="1ad93612-ab12-4b30-984f-119e1b924a84" containerID="ddcbf11a00d3d2b2cc8dba953e8ea411de73bf086be68e4a972c789cfa038823" exitCode=1 Mar 18 13:23:43.125915 master-0 kubenswrapper[28504]: I0318 13:23:43.125848 28504 generic.go:334] "Generic (PLEG): container finished" podID="d2cf9274-25d2-4576-bbef-1d416dfff0a9" containerID="db73a77a31c8b1e864924b98296d985e4ebe8a8cec9a1770fc0976a7285d12ff" exitCode=0 Mar 18 13:23:43.125915 master-0 kubenswrapper[28504]: I0318 13:23:43.125901 28504 generic.go:334] "Generic (PLEG): container finished" podID="d2cf9274-25d2-4576-bbef-1d416dfff0a9" containerID="4e40f363b03daa87aca7cb71f28f83a28265ae86967a44b24bfca71c4bc0dc50" exitCode=0 Mar 18 13:23:43.141036 master-0 kubenswrapper[28504]: I0318 13:23:43.140909 28504 generic.go:334] "Generic (PLEG): container finished" podID="20dc979a-732b-43b5-acc2-118e4c350470" containerID="25dc4f55701fc072574e9fbf9afecda3f3ce7724cd8af5190b0641c9037070fb" exitCode=0 Mar 18 13:23:43.207040 master-0 kubenswrapper[28504]: E0318 13:23:43.206912 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:43.320004 master-0 kubenswrapper[28504]: E0318 13:23:43.307055 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:43.320004 master-0 kubenswrapper[28504]: I0318 13:23:43.308302 28504 manager.go:324] Recovery completed Mar 18 13:23:43.412010 master-0 kubenswrapper[28504]: 
E0318 13:23:43.407393 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:43.449974 master-0 kubenswrapper[28504]: E0318 13:23:43.449017 28504 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 13:23:43.469115 master-0 kubenswrapper[28504]: I0318 13:23:43.466158 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:43.473247 master-0 kubenswrapper[28504]: I0318 13:23:43.469844 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:43.473247 master-0 kubenswrapper[28504]: I0318 13:23:43.469896 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:43.473247 master-0 kubenswrapper[28504]: I0318 13:23:43.469907 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481111 28504 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481143 28504 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481182 28504 state_mem.go:36] "Initialized new in-memory state store" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481439 28504 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481452 28504 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481476 28504 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 
13:23:43.481484 28504 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 18 13:23:43.484569 master-0 kubenswrapper[28504]: I0318 13:23:43.481493 28504 policy_none.go:49] "None policy: Start" Mar 18 13:23:43.500120 master-0 kubenswrapper[28504]: I0318 13:23:43.499806 28504 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 13:23:43.500120 master-0 kubenswrapper[28504]: I0318 13:23:43.499880 28504 state_mem.go:35] "Initializing new in-memory state store" Mar 18 13:23:43.500867 master-0 kubenswrapper[28504]: I0318 13:23:43.500468 28504 state_mem.go:75] "Updated machine memory state" Mar 18 13:23:43.500867 master-0 kubenswrapper[28504]: I0318 13:23:43.500507 28504 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 18 13:23:43.510214 master-0 kubenswrapper[28504]: E0318 13:23:43.510023 28504 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:23:43.605339 master-0 kubenswrapper[28504]: I0318 13:23:43.602408 28504 manager.go:334] "Starting Device Plugin manager" Mar 18 13:23:43.605339 master-0 kubenswrapper[28504]: I0318 13:23:43.602491 28504 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 13:23:43.605339 master-0 kubenswrapper[28504]: I0318 13:23:43.602508 28504 server.go:79] "Starting device plugin registration server" Mar 18 13:23:43.605339 master-0 kubenswrapper[28504]: I0318 13:23:43.602987 28504 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 13:23:43.605339 master-0 kubenswrapper[28504]: I0318 13:23:43.603001 28504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 13:23:43.611721 master-0 kubenswrapper[28504]: I0318 13:23:43.611683 28504 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 13:23:43.611875 master-0 
kubenswrapper[28504]: I0318 13:23:43.611827 28504 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 13:23:43.611875 master-0 kubenswrapper[28504]: I0318 13:23:43.611838 28504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 13:23:43.626567 master-0 kubenswrapper[28504]: E0318 13:23:43.625874 28504 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 13:23:43.708961 master-0 kubenswrapper[28504]: I0318 13:23:43.703189 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:43.708961 master-0 kubenswrapper[28504]: I0318 13:23:43.706435 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:43.708961 master-0 kubenswrapper[28504]: I0318 13:23:43.706461 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:43.708961 master-0 kubenswrapper[28504]: I0318 13:23:43.706469 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:43.708961 master-0 kubenswrapper[28504]: I0318 13:23:43.706485 28504 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:23:44.250086 master-0 kubenswrapper[28504]: I0318 13:23:44.250001 28504 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:23:44.250309 master-0 kubenswrapper[28504]: I0318 13:23:44.250119 28504 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 18 13:23:44.252436 master-0 kubenswrapper[28504]: I0318 13:23:44.252371 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.252510 master-0 kubenswrapper[28504]: I0318 13:23:44.252447 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.252510 master-0 kubenswrapper[28504]: I0318 13:23:44.252460 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.252623 master-0 kubenswrapper[28504]: I0318 13:23:44.252566 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.252770 master-0 kubenswrapper[28504]: I0318 13:23:44.252741 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.255414 master-0 kubenswrapper[28504]: I0318 13:23:44.255369 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.255492 master-0 kubenswrapper[28504]: I0318 13:23:44.255425 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.255492 master-0 kubenswrapper[28504]: I0318 13:23:44.255439 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.255605 master-0 kubenswrapper[28504]: I0318 13:23:44.255584 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.255681 master-0 kubenswrapper[28504]: I0318 13:23:44.255656 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.255729 master-0 kubenswrapper[28504]: I0318 
13:23:44.255692 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.255729 master-0 kubenswrapper[28504]: I0318 13:23:44.255703 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.255729 master-0 kubenswrapper[28504]: I0318 13:23:44.255715 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.257865 master-0 kubenswrapper[28504]: I0318 13:23:44.257836 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.257865 master-0 kubenswrapper[28504]: I0318 13:23:44.257867 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.258002 master-0 kubenswrapper[28504]: I0318 13:23:44.257878 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.258002 master-0 kubenswrapper[28504]: I0318 13:23:44.257975 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.258192 master-0 kubenswrapper[28504]: I0318 13:23:44.258156 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.260121 master-0 kubenswrapper[28504]: I0318 13:23:44.260088 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.260121 master-0 kubenswrapper[28504]: I0318 13:23:44.260119 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.260260 master-0 kubenswrapper[28504]: I0318 13:23:44.260130 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 18 13:23:44.260260 master-0 kubenswrapper[28504]: I0318 13:23:44.260128 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.260260 master-0 kubenswrapper[28504]: I0318 13:23:44.260162 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.260260 master-0 kubenswrapper[28504]: I0318 13:23:44.260176 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.260260 master-0 kubenswrapper[28504]: I0318 13:23:44.260226 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.260491 master-0 kubenswrapper[28504]: I0318 13:23:44.260451 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.260723 master-0 kubenswrapper[28504]: I0318 13:23:44.260701 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.260723 master-0 kubenswrapper[28504]: I0318 13:23:44.260722 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.260801 master-0 kubenswrapper[28504]: I0318 13:23:44.260732 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.277328 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.277452 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.277466 28504 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.277652 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.277804 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.277829 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.278339 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.281961 master-0 kubenswrapper[28504]: I0318 13:23:44.281240 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.286571 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.286612 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.286623 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.286826 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161cd3706961d9a83285893d6e92c33d138d2abea441f91387021bb04fef5a38" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.286925 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"b6b74e1434af586928d0d97de2097dc7d5af0debabf7fc72ae9441fd8215f19c"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.286998 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"476397230dd5265b109c88cb9895dcb2331c878aa0e952499f1e99bacdfb7c70"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287010 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"f24e2a620b5b7fcf0061b5eed63562935874a9516125dafe2e71d357a479bb90"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287020 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"6d83f8447a991a30e2932285a9ad9391e4be4f81c9b4bec0c838fb37dccbbcda"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287032 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"9cdb7659f9e5befc4b423f8f01e97091301553ed5776dec5e04ebf95f793c39d"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287054 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a577631cf83d4d696a51ef5800c1380f23cc2dfd5a5c79567b96e2414f25b3b1" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287090 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"87b86f2af8e501ae34658be585500655faa626562bf4927f068e08991f40d160"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287103 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"e140efc28fb74fa94c1d843a6f6a44466dcb4914a6c8eada7179bb0663b14c56"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287114 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"b9ab4da2bf00eddad01601b81bba9f16f6744134ee63b0910cd8e62f9b4a3e0d"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287124 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"4f190a1e5cc84fa7af8fb29dad5d8ad4c967b2e4627e9634fba3c046d5f350df"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287135 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287166 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="792ac459c58c5c5c87f43812b7188a5914dfddf16111da68a4a9f5f5502a61fc" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287199 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287211 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"49ae7d75896e0c278ca4b4cb9c4f8b076e025d8e605f566c7b21c0b8fb8bc3f7"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287232 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b30879eb2b02b10f5626375185ef5a50b2b5911613002b67fd621b1c5c99680" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287254 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd6d9cd064ffc49598289235ab6b846f24e69f6bc0b898e367dc9ec6a8b35e1" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287271 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287282 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287292 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287306 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9a15447edfc940cd5b4dde2df7e8e6360f5b93278864866c14e686e33bd8d32a"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287343 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce97760530466dc4fab04d92ea3320ac86069f6a538466695591a4fec01d17ee" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287358 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"848877c2f6d1ef07f17e6c1264b87f7b953b932fd22f35ae6b8c6b811221f114"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287369 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"d4f41e4ce4d9d6e7de6cfd4a3e8227b63acd5c4e76d0cf03caa6732417545af9"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287381 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"f81aec172d3fd5ba7f1a996a5480892d078f8a7bb1def93bddd40cd1c81466ab"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287394 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"00152beaf4ef9173dcc6f816e81c474dc52f514b563ec7779b209fc77ec8bb11"} Mar 18 13:23:44.290959 
master-0 kubenswrapper[28504]: I0318 13:23:44.287404 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"6f05f6747a6514adc5cde513919c3bdc29ffb6ed0ade2f6a425c19a551bb4a8c"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287417 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"e371dab0b58bcbcc5b1907ef685fdfadda0906d8d24523dfbc948bf72419b864"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287430 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"f2eaa8545a70bd93c6fda5c0d0d68dc69b5076035140e52196a502a53a980e02"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287443 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"200e8cc7b998c12ebab49945348ad20ad11d9b022c6433d242aed2cda0e0a774"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287455 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"107bbfc4822b298b178da7c2027a8844c3612176c3e5d6fcb31db24eadcd1790"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287469 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b06475f72c4aa178a3711e3bf8a803b73ed7bca27bffed7ac62aefe98506c3d" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287515 28504 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0d931af2c5d54a586a9cb21f694a9dbf73198cb23716b2134948c1a2dbbd5bc6" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287540 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5f4bb0323e76ef7cd02a1d41797e05db5442b3a066933557b53fceaffa8ab5" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287552 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c416409750419b3738641dbf762d8e4ba531250589956be62e2ee0593e39b8a" Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287560 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"4bd355c34f8aa8d889ca1a40b947fb34311faee6233b1e449a1cc61917522f5b"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287572 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"0cd8e6f707daec133a61d96c86c0549a9d428fdbed2e6e436300d058f6e9d875"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287584 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"869e4216741d6f450122345795f65e862d784b38e4a915e11371713c52cf93a3"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 13:23:44.287595 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"80f499fc9da9c64215ee87b646e32e10113a3a485956cfb50964722faffe7405"} Mar 18 13:23:44.290959 master-0 kubenswrapper[28504]: I0318 
13:23:44.287756 28504 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:23:44.292241 master-0 kubenswrapper[28504]: I0318 13:23:44.291106 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.292241 master-0 kubenswrapper[28504]: I0318 13:23:44.291176 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.292241 master-0 kubenswrapper[28504]: I0318 13:23:44.291178 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:23:44.292241 master-0 kubenswrapper[28504]: I0318 13:23:44.291219 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:23:44.292241 master-0 kubenswrapper[28504]: I0318 13:23:44.291187 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:44.292241 master-0 kubenswrapper[28504]: I0318 13:23:44.291230 28504 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:23:46.016141 master-0 kubenswrapper[28504]: E0318 13:23:46.016069 28504 resource_metrics.go:161] "Error getting summary for resourceMetric prometheus endpoint" err="failed to get node info: node \"master-0\" not found" Mar 18 13:23:48.622169 master-0 kubenswrapper[28504]: I0318 13:23:48.622003 28504 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:23:48.622730 master-0 kubenswrapper[28504]: I0318 13:23:48.622266 28504 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 13:23:48.623278 master-0 kubenswrapper[28504]: I0318 13:23:48.623229 28504 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Mar 18 13:23:48.630320 master-0 kubenswrapper[28504]: I0318 13:23:48.630266 28504 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:23:48.633523 master-0 kubenswrapper[28504]: I0318 13:23:48.633473 28504 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 13:23:48.690129 master-0 kubenswrapper[28504]: I0318 13:23:48.690056 28504 apiserver.go:52] "Watching apiserver" Mar 18 13:23:48.714993 master-0 kubenswrapper[28504]: I0318 13:23:48.714954 28504 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:23:48.719777 master-0 kubenswrapper[28504]: I0318 13:23:48.719491 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn","openshift-dns-operator/dns-operator-9c5679d8f-bqbzx","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5","openshift-marketplace/certified-operators-d7pj2","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz","openshift-service-ca/service-ca-79bc6b8d76-855bx","openshift-monitoring/kube-state-metrics-7bbc969446-dldw9","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w","openshift-kube-scheduler/installer-5-retry-1-master-0","openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc","openshift-kube-apiserver/installer-1-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-machine-config-operator/machine-config-ope
rator-84d549f6d5-6qlqd","openshift-machine-config-operator/machine-config-server-4f5s4","openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz","openshift-dns/node-resolver-slqms","openshift-kube-scheduler/installer-5-master-0","openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5","openshift-monitoring/metrics-server-648866dd9c-ztkrd","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42","openshift-ovn-kubernetes/ovnkube-node-pfs29","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz","openshift-kube-controller-manager/installer-2-master-0","openshift-marketplace/marketplace-operator-89ccd998f-4v84b","openshift-multus/multus-9bhww","openshift-multus/multus-additional-cni-plugins-xpppb","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw","openshift-network-diagnostics/network-check-target-zlgkc","openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5","openshift-etcd/installer-2-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/network-metrics-daemon-kq2j4","openshift-network-operator/network-operator-7bd846bfc4-mk4d5","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-dns/dns-default-wl929","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s","openshift-marketplace/redhat-marketplace-p546b","openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9","openshift-ingress/router-default-7dcf5569b5-mtnzv","openshift-monitoring/node-exporter-f55c6","openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2","openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm","assisted-installer/assisted-installer-controller-m2vzq","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l","openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8","opensh
ift-network-node-identity/network-node-identity-xcbtb","openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z","openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr","openshift-apiserver/apiserver-574f6d5bf6-8krhk","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f","openshift-machine-config-operator/machine-config-daemon-2qjl7","openshift-marketplace/redhat-operators-459lq","openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb","openshift-network-operator/iptables-alerter-tvnss","openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2","openshift-etcd/installer-1-master-0","openshift-insights/insights-operator-68bf6ff9d6-ckwz8","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg","openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt","openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c","openshift-etcd/etcd-master-0","openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9","openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq","openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv","openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-controller-manager/controller-manager-66b7876dbc-rdzrh","openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm","openshift-cluster-node-tuning-operator/tuned-rlp78","openshift-cluster-storage-opera
tor/csi-snapshot-controller-operator-5f5d689c6b-4s6b8","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk","openshift-marketplace/community-operators-nhwvw","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl"] Mar 18 13:23:48.722536 master-0 kubenswrapper[28504]: I0318 13:23:48.722475 28504 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="bd99bb9c-615b-4ddc-8849-489954612633" Mar 18 13:23:48.726897 master-0 kubenswrapper[28504]: I0318 13:23:48.726825 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-m2vzq" Mar 18 13:23:48.727589 master-0 kubenswrapper[28504]: I0318 13:23:48.727558 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:23:48.731762 master-0 kubenswrapper[28504]: I0318 13:23:48.731704 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:23:48.731923 master-0 kubenswrapper[28504]: I0318 13:23:48.731880 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:23:48.732399 master-0 kubenswrapper[28504]: I0318 13:23:48.732355 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:23:48.735412 master-0 kubenswrapper[28504]: I0318 13:23:48.734314 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:23:48.735412 master-0 kubenswrapper[28504]: I0318 13:23:48.735078 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:23:48.736878 master-0 kubenswrapper[28504]: I0318 13:23:48.736830 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:23:48.737327 master-0 kubenswrapper[28504]: I0318 13:23:48.737300 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 13:23:48.737876 master-0 kubenswrapper[28504]: I0318 13:23:48.737832 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:23:48.737876 master-0 kubenswrapper[28504]: I0318 13:23:48.737852 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:23:48.738025 master-0 kubenswrapper[28504]: I0318 13:23:48.737878 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 13:23:48.738025 master-0 kubenswrapper[28504]: I0318 13:23:48.737987 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:23:48.738109 master-0 kubenswrapper[28504]: I0318 13:23:48.738073 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:23:48.738336 master-0 kubenswrapper[28504]: I0318 13:23:48.738290 28504 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:23:48.738572 master-0 kubenswrapper[28504]: I0318 13:23:48.738531 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:23:48.738885 master-0 kubenswrapper[28504]: I0318 13:23:48.738576 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:23:48.739368 master-0 kubenswrapper[28504]: I0318 13:23:48.739346 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:23:48.743303 master-0 kubenswrapper[28504]: I0318 13:23:48.743260 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:23:48.744314 master-0 kubenswrapper[28504]: I0318 13:23:48.744280 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:23:48.744476 master-0 kubenswrapper[28504]: I0318 13:23:48.744442 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.744542 master-0 kubenswrapper[28504]: I0318 13:23:48.744484 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.744542 master-0 
kubenswrapper[28504]: I0318 13:23:48.744510 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.744542 master-0 kubenswrapper[28504]: I0318 13:23:48.744535 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744560 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744580 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744600 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744619 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744641 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744662 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.744701 master-0 kubenswrapper[28504]: I0318 13:23:48.744681 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.745015 master-0 kubenswrapper[28504]: I0318 13:23:48.744704 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:48.745015 master-0 kubenswrapper[28504]: I0318 13:23:48.744880 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:23:48.745208 master-0 kubenswrapper[28504]: I0318 13:23:48.745070 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:23:48.745208 master-0 kubenswrapper[28504]: I0318 13:23:48.745113 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 13:23:48.745208 master-0 kubenswrapper[28504]: I0318 13:23:48.745154 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:23:48.745374 master-0 kubenswrapper[28504]: I0318 13:23:48.745235 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:23:48.745374 master-0 kubenswrapper[28504]: I0318 13:23:48.745315 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:23:48.745374 master-0 kubenswrapper[28504]: I0318 13:23:48.745344 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:23:48.745503 master-0 kubenswrapper[28504]: I0318 13:23:48.745482 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:23:48.745790 master-0 kubenswrapper[28504]: I0318 13:23:48.745631 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:23:48.745790 master-0 kubenswrapper[28504]: I0318 13:23:48.745779 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 13:23:48.746037 master-0 kubenswrapper[28504]: I0318 13:23:48.746014 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 13:23:48.746159 master-0 kubenswrapper[28504]: I0318 13:23:48.746127 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:23:48.746274 master-0 kubenswrapper[28504]: I0318 13:23:48.746248 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:23:48.746415 master-0 kubenswrapper[28504]: I0318 13:23:48.746365 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747635 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747673 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747701 28504 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747740 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747766 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747791 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.747811 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.752319 master-0 
kubenswrapper[28504]: I0318 13:23:48.747831 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.751638 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.751787 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.752003 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.752003 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-579bw" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.752038 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.752189 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:23:48.752319 master-0 kubenswrapper[28504]: I0318 13:23:48.752293 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:23:48.753029 master-0 kubenswrapper[28504]: I0318 13:23:48.752397 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 
13:23:48.753029 master-0 kubenswrapper[28504]: I0318 13:23:48.752559 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 13:23:48.753029 master-0 kubenswrapper[28504]: I0318 13:23:48.752659 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 13:23:48.753029 master-0 kubenswrapper[28504]: I0318 13:23:48.752789 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:23:48.753029 master-0 kubenswrapper[28504]: I0318 13:23:48.752880 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:23:48.753029 master-0 kubenswrapper[28504]: I0318 13:23:48.753013 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 13:23:48.753259 master-0 kubenswrapper[28504]: I0318 13:23:48.753226 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 13:23:48.753376 master-0 kubenswrapper[28504]: I0318 13:23:48.753354 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 13:23:48.753570 master-0 kubenswrapper[28504]: I0318 13:23:48.753541 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:23:48.753570 master-0 kubenswrapper[28504]: I0318 13:23:48.753564 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 13:23:48.753831 master-0 kubenswrapper[28504]: I0318 13:23:48.753810 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 13:23:48.753900 master-0 kubenswrapper[28504]: I0318 13:23:48.753863 28504 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:23:48.754006 master-0 kubenswrapper[28504]: I0318 13:23:48.753985 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:23:48.754087 master-0 kubenswrapper[28504]: I0318 13:23:48.754067 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:23:48.754147 master-0 kubenswrapper[28504]: I0318 13:23:48.754108 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 13:23:48.754195 master-0 kubenswrapper[28504]: I0318 13:23:48.754179 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:23:48.754195 master-0 kubenswrapper[28504]: I0318 13:23:48.754186 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:23:48.754282 master-0 kubenswrapper[28504]: I0318 13:23:48.754218 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 13:23:48.754399 master-0 kubenswrapper[28504]: I0318 13:23:48.754376 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:23:48.754484 master-0 kubenswrapper[28504]: I0318 13:23:48.754464 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:23:48.754614 master-0 kubenswrapper[28504]: I0318 13:23:48.754593 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:23:48.754733 master-0 kubenswrapper[28504]: I0318 13:23:48.754711 28504 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:23:48.754791 master-0 kubenswrapper[28504]: I0318 13:23:48.754731 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 13:23:48.754791 master-0 kubenswrapper[28504]: I0318 13:23:48.754776 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 13:23:48.754879 master-0 kubenswrapper[28504]: I0318 13:23:48.754847 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 13:23:48.755029 master-0 kubenswrapper[28504]: I0318 13:23:48.755007 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:23:48.755122 master-0 kubenswrapper[28504]: I0318 13:23:48.755074 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 13:23:48.755175 master-0 kubenswrapper[28504]: I0318 13:23:48.755010 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:23:48.755175 master-0 kubenswrapper[28504]: I0318 13:23:48.755161 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.779021 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.783209 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 13:23:48.786422 master-0 
kubenswrapper[28504]: I0318 13:23:48.784299 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.784529 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.784864 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785035 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785198 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785305 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785402 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785503 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785595 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:23:48.786422 master-0 kubenswrapper[28504]: I0318 13:23:48.785749 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 13:23:48.798288 master-0 kubenswrapper[28504]: I0318 13:23:48.797985 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 13:23:48.798642 master-0 kubenswrapper[28504]: I0318 13:23:48.798617 28504 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 13:23:48.798722 master-0 kubenswrapper[28504]: I0318 13:23:48.798703 28504 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 13:23:48.798890 master-0 kubenswrapper[28504]: I0318 13:23:48.798866 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 13:23:48.799043 master-0 kubenswrapper[28504]: I0318 13:23:48.799018 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 13:23:48.799407 master-0 kubenswrapper[28504]: I0318 13:23:48.799374 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 13:23:48.800034 master-0 kubenswrapper[28504]: I0318 13:23:48.799995 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 13:23:48.807248 master-0 kubenswrapper[28504]: I0318 13:23:48.807199 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 13:23:48.807569 master-0 kubenswrapper[28504]: I0318 13:23:48.807544 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 13:23:48.807759 master-0 kubenswrapper[28504]: I0318 13:23:48.807718 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 13:23:48.807843 master-0 kubenswrapper[28504]: I0318 13:23:48.807771 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 13:23:48.810133 master-0 kubenswrapper[28504]: I0318 13:23:48.810112 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 13:23:48.814317 master-0 kubenswrapper[28504]: I0318 13:23:48.814253 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 18 13:23:48.814500 master-0 kubenswrapper[28504]: I0318 13:23:48.814455 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 18 13:23:48.814878 master-0 kubenswrapper[28504]: I0318 13:23:48.814846 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 13:23:48.815198 master-0 kubenswrapper[28504]: I0318 13:23:48.815153 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 13:23:48.815424 master-0 kubenswrapper[28504]: I0318 13:23:48.815370 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 13:23:48.815491 master-0 kubenswrapper[28504]: I0318 13:23:48.815375 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 13:23:48.815533 master-0 kubenswrapper[28504]: I0318 13:23:48.815474 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 13:23:48.815692 master-0 kubenswrapper[28504]: I0318 13:23:48.815662 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 13:23:48.815890 master-0 kubenswrapper[28504]: I0318 13:23:48.815851 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 13:23:48.816134 master-0 kubenswrapper[28504]: I0318 13:23:48.816103 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 18 13:23:48.816192 master-0 kubenswrapper[28504]: I0318 13:23:48.816140 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:23:48.816825 master-0 kubenswrapper[28504]: I0318 13:23:48.816795 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 13:23:48.821216 master-0 kubenswrapper[28504]: I0318 13:23:48.821182 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 13:23:48.823117 master-0 kubenswrapper[28504]: I0318 13:23:48.823062 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:23:48.823117 master-0 kubenswrapper[28504]: I0318 13:23:48.823106 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:23:48.823583 master-0 kubenswrapper[28504]: I0318 13:23:48.823550 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:48.831980 master-0 kubenswrapper[28504]: I0318 13:23:48.831929 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:23:48.832573 master-0 kubenswrapper[28504]: I0318 13:23:48.832551 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 18 13:23:48.841580 master-0 kubenswrapper[28504]: I0318 13:23:48.841371 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 13:23:48.849054 master-0 kubenswrapper[28504]: I0318 13:23:48.848873 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:23:48.849054 master-0 kubenswrapper[28504]: I0318 13:23:48.848991 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.849054 master-0 kubenswrapper[28504]: I0318 13:23:48.849049 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849076 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-webhook-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849126 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849149 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849197 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849214 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849231 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849280 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8"
Mar 18 13:23:48.849321 master-0 kubenswrapper[28504]: I0318 13:23:48.849308 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:23:48.849564 master-0 kubenswrapper[28504]: I0318 13:23:48.849330 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz"
Mar 18 13:23:48.849564 master-0 kubenswrapper[28504]: I0318 13:23:48.849381 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-utilities\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b"
Mar 18 13:23:48.849564 master-0 kubenswrapper[28504]: I0318 13:23:48.849405 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:23:48.849564 master-0 kubenswrapper[28504]: I0318 13:23:48.849454 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4671673d-afa0-481f-b3a2-2c2b9441b6ce-metrics-tls\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929"
Mar 18 13:23:48.849564 master-0 kubenswrapper[28504]: I0318 13:23:48.849495 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.849776 master-0 kubenswrapper[28504]: I0318 13:23:48.849576 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:48.849776 master-0 kubenswrapper[28504]: I0318 13:23:48.849640 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx"
Mar 18 13:23:48.849776 master-0 kubenswrapper[28504]: I0318 13:23:48.849691 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w"
Mar 18 13:23:48.849776 master-0 kubenswrapper[28504]: I0318 13:23:48.849718 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-dir\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:23:48.849776 master-0 kubenswrapper[28504]: I0318 13:23:48.849766 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2"
Mar 18 13:23:48.849916 master-0 kubenswrapper[28504]: I0318 13:23:48.849795 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk"
Mar 18 13:23:48.849916 master-0 kubenswrapper[28504]: I0318 13:23:48.849843 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-modprobe-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.849916 master-0 kubenswrapper[28504]: I0318 13:23:48.849871 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.850031 master-0 kubenswrapper[28504]: I0318 13:23:48.850000 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.850031 master-0 kubenswrapper[28504]: I0318 13:23:48.850024 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42"
Mar 18 13:23:48.850332 master-0 kubenswrapper[28504]: I0318 13:23:48.850046 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gv8b\" (UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:23:48.850332 master-0 kubenswrapper[28504]: I0318 13:23:48.850113 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8"
Mar 18 13:23:48.850332 master-0 kubenswrapper[28504]: I0318 13:23:48.850194 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.850332 master-0 kubenswrapper[28504]: I0318 13:23:48.850209 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-host\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.850332 master-0 kubenswrapper[28504]: I0318 13:23:48.850276 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfq7\" (UniqueName: \"kubernetes.io/projected/7e309570-09d0-412a-a74b-c5397d048a30-kube-api-access-mcfq7\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw"
Mar 18 13:23:48.851563 master-0 kubenswrapper[28504]: I0318 13:23:48.850292 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:23:48.851624 master-0 kubenswrapper[28504]: I0318 13:23:48.851572 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6d7j\" (UniqueName: \"kubernetes.io/projected/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-kube-api-access-q6d7j\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq"
Mar 18 13:23:48.851657 master-0 kubenswrapper[28504]: I0318 13:23:48.851618 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hthf8\" (UniqueName: \"kubernetes.io/projected/f3c106be-27ea-4849-b365-eff6d25f5e71-kube-api-access-hthf8\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7"
Mar 18 13:23:48.851689 master-0 kubenswrapper[28504]: I0318 13:23:48.851660 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.851723 master-0 kubenswrapper[28504]: I0318 13:23:48.851700 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr"
Mar 18 13:23:48.851723 master-0 kubenswrapper[28504]: I0318 13:23:48.851718 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf"
Mar 18 13:23:48.852023 master-0 kubenswrapper[28504]: I0318 13:23:48.851994 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.852076 master-0 kubenswrapper[28504]: I0318 13:23:48.852033 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4"
Mar 18 13:23:48.852109 master-0 kubenswrapper[28504]: I0318 13:23:48.852097 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-catalog-content\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2"
Mar 18 13:23:48.852143 master-0 kubenswrapper[28504]: I0318 13:23:48.852119 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:23:48.852174 master-0 kubenswrapper[28504]: I0318 13:23:48.852158 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-encryption-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.852212 master-0 kubenswrapper[28504]: I0318 13:23:48.852181 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.852212 master-0 kubenswrapper[28504]: I0318 13:23:48.852198 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz"
Mar 18 13:23:48.852280 master-0 kubenswrapper[28504]: I0318 13:23:48.852243 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:23:48.852280 master-0 kubenswrapper[28504]: I0318 13:23:48.852274 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg"
Mar 18 13:23:48.852340 master-0 kubenswrapper[28504]: I0318 13:23:48.852327 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:23:48.852371 master-0 kubenswrapper[28504]: I0318 13:23:48.852353 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-cabundle\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx"
Mar 18 13:23:48.852401 master-0 kubenswrapper[28504]: I0318 13:23:48.852371 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.852434 master-0 kubenswrapper[28504]: I0318 13:23:48.852412 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:23:48.852464 master-0 kubenswrapper[28504]: I0318 13:23:48.852432 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:23:48.852492 master-0 kubenswrapper[28504]: I0318 13:23:48.852474 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5"
Mar 18 13:23:48.852522 master-0 kubenswrapper[28504]: I0318 13:23:48.852494 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l"
Mar 18 13:23:48.852522 master-0 kubenswrapper[28504]: I0318 13:23:48.852512 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nllws\" (UniqueName: \"kubernetes.io/projected/317a89ea-e9dd-4167-8568-bb36e2431015-kube-api-access-nllws\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw"
Mar 18 13:23:48.852580 master-0 kubenswrapper[28504]: I0318 13:23:48.852527 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:23:48.852580 master-0 kubenswrapper[28504]: I0318 13:23:48.852564 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:23:48.852633 master-0 kubenswrapper[28504]: I0318 13:23:48.852587 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-encryption-config\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:23:48.852633 master-0 kubenswrapper[28504]: I0318 13:23:48.852607 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/375d5112-d2be-47cf-bee1-82614ba71ed8-tmpfs\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm"
Mar 18 13:23:48.852690 master-0 kubenswrapper[28504]: I0318 13:23:48.852643 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.852690 master-0 kubenswrapper[28504]: I0318 13:23:48.852662 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:23:48.852690 master-0 kubenswrapper[28504]: I0318 13:23:48.852684 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3c106be-27ea-4849-b365-eff6d25f5e71-rootfs\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7"
Mar 18 13:23:48.852773 master-0 kubenswrapper[28504]: I0318 13:23:48.852723 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-default-certificate\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:23:48.852773 master-0 kubenswrapper[28504]: I0318 13:23:48.852742 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss"
Mar 18 13:23:48.852773 master-0 kubenswrapper[28504]: I0318 13:23:48.852758 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:23:48.852862 master-0 kubenswrapper[28504]: I0318 13:23:48.852793 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"
Mar 18 13:23:48.852862 master-0 kubenswrapper[28504]: I0318 13:23:48.852812 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl"
Mar 18 13:23:48.852862 master-0 kubenswrapper[28504]: I0318 13:23:48.852830 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:23:48.852967 master-0 kubenswrapper[28504]: I0318 13:23:48.852847 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzldt\" (UniqueName: \"kubernetes.io/projected/1ad93612-ab12-4b30-984f-119e1b924a84-kube-api-access-xzldt\") pod \"csi-snapshot-controller-64854d9cff-wkw7f\" (UID: \"1ad93612-ab12-4b30-984f-119e1b924a84\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f"
Mar 18 13:23:48.852967 master-0 kubenswrapper[28504]: I0318 13:23:48.852890 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb"
Mar 18 13:23:48.852967 master-0 kubenswrapper[28504]: I0318 13:23:48.852908 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:23:48.852967 master-0 kubenswrapper[28504]: I0318 13:23:48.852923 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.852967 master-0 kubenswrapper[28504]: I0318 13:23:48.852964 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v"
Mar 18 13:23:48.853108 master-0 kubenswrapper[28504]: I0318 13:23:48.852982 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd"
Mar 18 13:23:48.853108 master-0 kubenswrapper[28504]: I0318 13:23:48.853006 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8"
Mar 18 13:23:48.853108 master-0 kubenswrapper[28504]: I0318 13:23:48.853059 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"
Mar 18 13:23:48.853108 master-0 kubenswrapper[28504]: I0318 13:23:48.853084 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh"
Mar 18 13:23:48.853216 master-0 kubenswrapper[28504]: I0318 13:23:48.853131
28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:48.853216 master-0 kubenswrapper[28504]: I0318 13:23:48.853154 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-utilities\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:23:48.853216 master-0 kubenswrapper[28504]: I0318 13:23:48.853198 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d0b174-33e4-46ee-863b-b5cc2a271b85-serving-cert\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.853304 master-0 kubenswrapper[28504]: I0318 13:23:48.853223 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-catalog-content\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:23:48.853304 master-0 kubenswrapper[28504]: I0318 13:23:48.853246 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:23:48.853304 master-0 kubenswrapper[28504]: I0318 13:23:48.853291 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w454\" (UniqueName: \"kubernetes.io/projected/933a37fd-d76a-4f60-8dd8-301fb73daf42-kube-api-access-5w454\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" Mar 18 13:23:48.853390 master-0 kubenswrapper[28504]: I0318 13:23:48.853317 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw64j\" (UniqueName: \"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:23:48.853390 master-0 kubenswrapper[28504]: I0318 13:23:48.853363 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27hr\" (UniqueName: \"kubernetes.io/projected/2385db6b-4286-4839-822c-aa9c52290172-kube-api-access-d27hr\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:48.853445 master-0 kubenswrapper[28504]: I0318 13:23:48.853388 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-policies\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.853445 master-0 kubenswrapper[28504]: 
I0318 13:23:48.853411 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.853501 master-0 kubenswrapper[28504]: I0318 13:23:48.853456 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.853501 master-0 kubenswrapper[28504]: I0318 13:23:48.853485 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.853562 master-0 kubenswrapper[28504]: I0318 13:23:48.853530 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:48.853593 master-0 kubenswrapper[28504]: I0318 13:23:48.853560 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq596\" (UniqueName: \"kubernetes.io/projected/734f9f10-5bde-44d5-a831-021b93fd667d-kube-api-access-mq596\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: 
\"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:48.853623 master-0 kubenswrapper[28504]: I0318 13:23:48.853607 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:23:48.853656 master-0 kubenswrapper[28504]: I0318 13:23:48.853634 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:48.853701 master-0 kubenswrapper[28504]: I0318 13:23:48.853679 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.853733 master-0 kubenswrapper[28504]: I0318 13:23:48.853710 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:48.853772 master-0 
kubenswrapper[28504]: I0318 13:23:48.853734 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.853804 master-0 kubenswrapper[28504]: I0318 13:23:48.853784 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.853839 master-0 kubenswrapper[28504]: I0318 13:23:48.853806 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8f59a12b-d690-44c5-972c-fb4b0b5819f1-hosts-file\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:23:48.853867 master-0 kubenswrapper[28504]: I0318 13:23:48.853830 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rccw\" (UniqueName: \"kubernetes.io/projected/2e0fa133-60e7-47d0-996e-7e85aef2a218-kube-api-access-7rccw\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:23:48.853897 master-0 kubenswrapper[28504]: I0318 13:23:48.853876 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " 
pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:48.853928 master-0 kubenswrapper[28504]: I0318 13:23:48.853900 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.853994 master-0 kubenswrapper[28504]: I0318 13:23:48.853924 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:23:48.853994 master-0 kubenswrapper[28504]: I0318 13:23:48.853981 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a951627-c032-4846-821c-c4bcbf4a91b9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:23:48.854061 master-0 kubenswrapper[28504]: I0318 13:23:48.854005 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:48.854061 master-0 kubenswrapper[28504]: I0318 13:23:48.854029 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9tzl\" 
(UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:23:48.854061 master-0 kubenswrapper[28504]: I0318 13:23:48.854052 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854078 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854100 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854130 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmhd\" (UniqueName: \"kubernetes.io/projected/3a039fc2-b0af-4b2c-a884-1c274c08064d-kube-api-access-pmmhd\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " 
pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854153 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5475b\" (UniqueName: \"kubernetes.io/projected/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-kube-api-access-5475b\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854202 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854229 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-trusted-ca-bundle\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854251 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-tuned\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854275 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljm6\" (UniqueName: 
\"kubernetes.io/projected/d2cf9274-25d2-4576-bbef-1d416dfff0a9-kube-api-access-vljm6\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854303 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdqs\" (UniqueName: \"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854324 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854346 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/234a5a6c-3790-49d0-b1e7-86f81048d96a-cache\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854368 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b29z\" (UniqueName: \"kubernetes.io/projected/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-kube-api-access-7b29z\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 
13:23:48.854389 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-lib-modules\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854413 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854436 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854464 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854488 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854508 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.854559 master-0 kubenswrapper[28504]: I0318 13:23:48.854535 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.854586 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.854613 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.854636 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.854656 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855469 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysconfig\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855501 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855532 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855560 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-client\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855585 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855609 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-service-ca-bundle\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855633 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855676 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fqxgz\" (UniqueName: \"kubernetes.io/projected/ebe459df-4be3-4a73-a061-5d2c637f57be-kube-api-access-fqxgz\") pod \"network-check-source-b4bf74f6-qnwtb\" (UID: \"ebe459df-4be3-4a73-a061-5d2c637f57be\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855707 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855734 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855755 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855777 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855800 28504 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855824 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855848 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855876 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/92e396cd-a0d9-4b6b-9d82-add1ce2a8712-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-8qhwm\" (UID: \"92e396cd-a0d9-4b6b-9d82-add1ce2a8712\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855898 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" 
(UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855919 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855958 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.855984 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.856005 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.856027 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.856742 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-images\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.856854 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.857046 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.856760 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.857542 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea3c78-dede-468f-89a5-551133f794c5-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.857584 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-config\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.857705 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-catalog-content\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.857830 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a951627-c032-4846-821c-c4bcbf4a91b9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.857953 28504 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/16a930da-d793-486f-bcef-cf042d3c427d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.858174 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c4572e-0b38-4db1-96e5-6a35e29048e7-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.858245 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-env-overrides\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.858668 master-0 kubenswrapper[28504]: I0318 13:23:48.858581 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/234a5a6c-3790-49d0-b1e7-86f81048d96a-cache\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.858828 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-serving-cert\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.860199 master-0 
kubenswrapper[28504]: I0318 13:23:48.858895 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-utilities\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.858894 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-trusted-ca-bundle\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859116 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-utilities\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859137 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-tuned\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859357 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.860199 master-0 
kubenswrapper[28504]: I0318 13:23:48.859567 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-multus-daemon-config\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859603 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859767 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-cabundle\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859781 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-encryption-config\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859821 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0fa133-60e7-47d0-996e-7e85aef2a218-catalog-content\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " 
pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859865 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/375d5112-d2be-47cf-bee1-82614ba71ed8-tmpfs\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859908 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-policies\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.859984 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea3c78-dede-468f-89a5-551133f794c5-config\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:23:48.860199 master-0 kubenswrapper[28504]: I0318 13:23:48.860118 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:48.860670 master-0 kubenswrapper[28504]: I0318 13:23:48.860272 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-service-ca-bundle\") pod 
\"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.860670 master-0 kubenswrapper[28504]: I0318 13:23:48.860306 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4671673d-afa0-481f-b3a2-2c2b9441b6ce-metrics-tls\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:23:48.860670 master-0 kubenswrapper[28504]: I0318 13:23:48.860469 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4086d06f-d50e-4632-9da7-508909429eef-cni-binary-copy\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.860670 master-0 kubenswrapper[28504]: I0318 13:23:48.860590 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-client\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.860773 master-0 kubenswrapper[28504]: I0318 13:23:48.860633 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:23:48.860773 master-0 kubenswrapper[28504]: I0318 13:23:48.860713 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod 
\"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:48.860773 master-0 kubenswrapper[28504]: I0318 13:23:48.860746 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.860857 master-0 kubenswrapper[28504]: I0318 13:23:48.860800 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.860900 master-0 kubenswrapper[28504]: I0318 13:23:48.860847 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.860992 master-0 kubenswrapper[28504]: I0318 13:23:48.860965 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.861556 master-0 kubenswrapper[28504]: I0318 13:23:48.860651 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/83a4f641-d28f-42aa-a228-f6086d720fe4-config\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:23:48.861556 master-0 kubenswrapper[28504]: I0318 13:23:48.860641 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-default-certificate\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.861638 master-0 kubenswrapper[28504]: I0318 13:23:48.861110 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:23:48.861638 master-0 kubenswrapper[28504]: I0318 13:23:48.861169 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/92e396cd-a0d9-4b6b-9d82-add1ce2a8712-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-8qhwm\" (UID: \"92e396cd-a0d9-4b6b-9d82-add1ce2a8712\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:23:48.861638 master-0 kubenswrapper[28504]: I0318 13:23:48.861253 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-config\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:23:48.861638 master-0 kubenswrapper[28504]: I0318 13:23:48.861277 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-config\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.861638 master-0 kubenswrapper[28504]: I0318 13:23:48.861612 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861639 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/234a5a6c-3790-49d0-b1e7-86f81048d96a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861394 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2b92a53-0b61-4e1d-a306-f9a498e48b38-metrics-tls\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861494 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861496 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-config\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861330 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0213214b-693b-411b-8254-48d7826011eb-serving-cert\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861371 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861694 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.861773 master-0 kubenswrapper[28504]: I0318 13:23:48.861736 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.861819 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-run\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.861902 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.861975 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-utilities\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862080 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbztv\" (UniqueName: \"kubernetes.io/projected/c074751c-6b3c-44df-aca5-42fa69662779-kube-api-access-bbztv\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 
13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862120 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-utilities\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862162 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da6a763d-2777-40c4-ae1f-c77ced406ea2-metrics-tls\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862304 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-key\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862328 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-catalog-content\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862371 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e309570-09d0-412a-a74b-c5397d048a30-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862390 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862406 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862423 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlm4c\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-kube-api-access-xlm4c\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862447 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a039fc2-b0af-4b2c-a884-1c274c08064d-signing-key\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862504 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/317a89ea-e9dd-4167-8568-bb36e2431015-catalog-content\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862511 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbm6\" (UniqueName: \"kubernetes.io/projected/b41c9132-92ef-429d-bdd5-9bdb024e04fc-kube-api-access-wlbm6\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862546 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862570 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-var-lib-kubelet\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862593 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:23:48.863212 master-0 
kubenswrapper[28504]: I0318 13:23:48.862618 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862626 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862661 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bfw\" (UniqueName: \"kubernetes.io/projected/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-kube-api-access-w6bfw\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862682 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862699 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: 
\"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862719 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862761 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862786 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862794 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cert\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862797 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862810 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862849 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-catalog-content\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862869 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862889 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862906 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.862954 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863020 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863024 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/a01c92f5-7938-437d-8262-11598bd8023c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863048 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jz6\" (UniqueName: \"kubernetes.io/projected/4671673d-afa0-481f-b3a2-2c2b9441b6ce-kube-api-access-d7jz6\") pod \"dns-default-wl929\" 
(UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863069 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863070 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-client\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863110 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5xj\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-kube-api-access-pp5xj\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863136 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863160 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863179 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863202 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4dcj\" (UniqueName: \"kubernetes.io/projected/375d5112-d2be-47cf-bee1-82614ba71ed8-kube-api-access-d4dcj\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863221 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9jhr\" (UniqueName: \"kubernetes.io/projected/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-kube-api-access-w9jhr\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863232 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " 
pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.863212 master-0 kubenswrapper[28504]: I0318 13:23:48.863242 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863270 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-apiservice-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863299 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbv4l\" (UniqueName: \"kubernetes.io/projected/02879f34-7062-4f07-9a5a-f965103d9182-kube-api-access-jbv4l\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863340 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-stats-auth\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863362 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863380 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/baeb6380-95e4-4e10-9798-e1e22f20bade-cache\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863398 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-serving-cert\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863417 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863379 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-catalog-content\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863544 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbksj\" (UniqueName: \"kubernetes.io/projected/9ca94153-9d1a-4b0a-a3eb-556e85f2e875-kube-api-access-hbksj\") pod \"migrator-8487694857-vf6mv\" (UID: \"9ca94153-9d1a-4b0a-a3eb-556e85f2e875\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863565 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d0b174-33e4-46ee-863b-b5cc2a271b85-kube-api-access\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863582 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863599 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863618 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " 
pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863637 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863655 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863675 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-serving-cert\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863731 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863751 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863770 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4djxt\" (UniqueName: \"kubernetes.io/projected/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-kube-api-access-4djxt\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863789 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863808 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863825 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-metrics-certs\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863842 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863865 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863882 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863902 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kpz5\" (UniqueName: \"kubernetes.io/projected/8f59a12b-d690-44c5-972c-fb4b0b5819f1-kube-api-access-8kpz5\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863919 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863927 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2b92a53-0b61-4e1d-a306-f9a498e48b38-trusted-ca\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.863951 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864088 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864119 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:23:48.866087 master-0 
kubenswrapper[28504]: I0318 13:23:48.864136 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864153 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864169 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-utilities\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864189 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864201 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-stats-auth\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " 
pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864206 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864231 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864253 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864272 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864289 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864365 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864434 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864456 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864475 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6sr4\" (UniqueName: \"kubernetes.io/projected/17adbc1a-f29c-4278-b29a-0cc3879b753f-kube-api-access-v6sr4\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864493 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-serving-ca\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864510 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864527 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864545 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864563 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864577 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864706 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864788 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/baeb6380-95e4-4e10-9798-e1e22f20bade-cache\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864847 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e691486-8540-4b79-8eed-b0fb829071db-metrics-certs\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " 
pod="openshift-multus/network-metrics-daemon-kq2j4" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864869 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cf9274-25d2-4576-bbef-1d416dfff0a9-utilities\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864580 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-ca\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864919 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864956 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-systemd\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864978 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864987 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-env-overrides\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.864994 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865102 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865124 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865143 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865161 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865166 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/eb8907fd-35dd-452a-8032-f2f95a6e553a-ovnkube-identity-cm\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865179 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865204 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") 
" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865203 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865225 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865230 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865266 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4671673d-afa0-481f-b3a2-2c2b9441b6ce-config-volume\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865291 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-node-pullsecrets\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: 
\"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865309 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865329 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865346 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865363 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-conf\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865385 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" 
(UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865405 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865447 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/46ae7b31-c91c-477e-a04a-a3a8541747be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865480 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865520 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865541 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-kubernetes\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865560 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865577 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865585 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf0ea4e-8b08-488f-b252-39580f46b756-etcd-client\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865596 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865615 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865634 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865653 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc69w\" (UniqueName: \"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865678 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c074751c-6b3c-44df-aca5-42fa69662779-snapshots\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865698 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865717 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvd5\" (UniqueName: \"kubernetes.io/projected/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-kube-api-access-5dvd5\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865734 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865751 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865757 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a01c92f5-7938-437d-8262-11598bd8023c-config\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 
13:23:48.865770 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-tmp\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865788 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-metrics-certs\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865794 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865842 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.865921 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.866025 28504 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-etcd-serving-ca\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.866054 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4bc77989-ecfc-4500-92a0-18c2b3b78408-env-overrides\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.866148 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.866144 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 13:23:48.866162 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.866087 master-0 kubenswrapper[28504]: I0318 
13:23:48.866204 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866360 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866399 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866584 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c4572e-0b38-4db1-96e5-6a35e29048e7-config\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866616 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866695 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.865634 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866809 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866888 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4671673d-afa0-481f-b3a2-2c2b9441b6ce-config-volume\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.866962 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0213214b-693b-411b-8254-48d7826011eb-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:48.869779 master-0 
kubenswrapper[28504]: I0318 13:23:48.867113 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c074751c-6b3c-44df-aca5-42fa69662779-snapshots\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867273 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867333 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb8907fd-35dd-452a-8032-f2f95a6e553a-webhook-cert\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867370 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867444 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-serving-cert\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " 
pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867465 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f16e797-a619-46a8-948a-9fdfc8a9891f-tmp\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867491 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit-dir\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867530 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867577 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867569 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/933a37fd-d76a-4f60-8dd8-301fb73daf42-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867614 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867629 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-sys\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867671 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6b9b\" (UniqueName: \"kubernetes.io/projected/0f16e797-a619-46a8-948a-9fdfc8a9891f-kube-api-access-q6b9b\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867702 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwfnk\" (UniqueName: \"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867725 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867745 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867771 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxn4v\" (UniqueName: \"kubernetes.io/projected/7a951627-c032-4846-821c-c4bcbf4a91b9-kube-api-access-wxn4v\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867825 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83a4f641-d28f-42aa-a228-f6086d720fe4-serving-cert\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.867851 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/933a37fd-d76a-4f60-8dd8-301fb73daf42-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5"
Mar 18 13:23:48.869779 master-0 kubenswrapper[28504]: I0318 13:23:48.868070 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:23:48.882208 master-0 kubenswrapper[28504]: I0318 13:23:48.882028 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 13:23:48.888457 master-0 kubenswrapper[28504]: I0318 13:23:48.888366 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ce8e99d-7b02-4bf4-a438-adde851918cb-serving-cert\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c"
Mar 18 13:23:48.903993 master-0 kubenswrapper[28504]: I0318 13:23:48.902258 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 13:23:48.907369 master-0 kubenswrapper[28504]: I0318 13:23:48.907330 28504 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 13:23:48.921387 master-0 kubenswrapper[28504]: I0318 13:23:48.921320 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 13:23:48.923840 master-0 kubenswrapper[28504]: I0318 13:23:48.923802 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36db10b8-33a2-4b54-85e2-9809eb6bc37d-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:23:48.942405 master-0 kubenswrapper[28504]: I0318 13:23:48.942329 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 13:23:48.962270 master-0 kubenswrapper[28504]: I0318 13:23:48.961097 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 13:23:48.968785 master-0 kubenswrapper[28504]: I0318 13:23:48.968690 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969102 master-0 kubenswrapper[28504]: I0318 13:23:48.968825 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-kubelet\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969102 master-0 kubenswrapper[28504]: I0318 13:23:48.968915 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:23:48.969102 master-0 kubenswrapper[28504]: I0318 13:23:48.968960 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969102 master-0 kubenswrapper[28504]: I0318 13:23:48.968984 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969102 master-0 kubenswrapper[28504]: I0318 13:23:48.969024 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:23:48.969102 master-0 kubenswrapper[28504]: I0318 13:23:48.969089 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-slash\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969174 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969216 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969242 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6ed4f640-d481-4e7a-92eb-f0eda82e138c-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969264 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969307 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-log-socket\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969315 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twczm\" (UniqueName: \"kubernetes.io/projected/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-kube-api-access-twczm\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt"
Mar 18 13:23:48.969389 master-0 kubenswrapper[28504]: I0318 13:23:48.969378 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969419 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6ed4f640-d481-4e7a-92eb-f0eda82e138c-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969452 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-ovn\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969495 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969545 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969561 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-netd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969587 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:23:48.969660 master-0 kubenswrapper[28504]: I0318 13:23:48.969650 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-systemd\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969669 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969688 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mks\" (UniqueName: \"kubernetes.io/projected/5a715e53-1874-4993-93d1-504c3470a6f5-kube-api-access-99mks\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969706 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969723 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969740 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-bin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969761 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969778 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-systemd\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969786 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-node-pullsecrets\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969817 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-node-pullsecrets\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969834 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-cni-bin\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.969920 master-0 kubenswrapper[28504]: I0318 13:23:48.969917 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.969968 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-conf\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.969991 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-cni-multus\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.969970 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970032 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970064 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-kubernetes\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970093 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970116 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970154 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-conf\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970182 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970188 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit-dir\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970222 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-sys\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970238 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-kubernetes\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970245 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970265 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-sys\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970265 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-systemd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970295 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-netns\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970299 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit-dir\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970340 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-sys\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.970340 master-0 kubenswrapper[28504]: I0318 13:23:48.970351 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhmmv\" (UniqueName: \"kubernetes.io/projected/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-api-access-xhmmv\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970375 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970447 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970487 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-node-log\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970520 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-dir\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970537 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-modprobe-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970557 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970562 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/baeb6380-95e4-4e10-9798-e1e22f20bade-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970573 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970606 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-run-netns\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970613 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-audit-dir\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970680 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-modprobe-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970705 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-etc-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970703 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970727 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-k8s-cni-cncf-io\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970730 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970786 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-host\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970822 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970859 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.970928 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.971029 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:23:48.971083 master-0 kubenswrapper[28504]: I0318 13:23:48.971064 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971108 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971125 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-host\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971170 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971176 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8a0944d2-d99a-42eb-81f5-a212b750b8f4-host-etc-kube\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971198 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971229 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971239 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-os-release\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971251 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3c106be-27ea-4849-b365-eff6d25f5e71-rootfs\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971271 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-var-lib-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971301 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3c106be-27ea-4849-b365-eff6d25f5e71-rootfs\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971309 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-conf-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971388 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-system-cni-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971391 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz"
Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971494 28504 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971561 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971588 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8f59a12b-d690-44c5-972c-fb4b0b5819f1-hosts-file\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971629 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971653 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971669 28504 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-config\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971750 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8f59a12b-d690-44c5-972c-fb4b0b5819f1-hosts-file\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:23:48.971758 master-0 kubenswrapper[28504]: I0318 13:23:48.971758 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-os-release\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.971784 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.971815 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b856d226-a137-4954-82c5-5929d579387a-node-exporter-textfile\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 
13:23:48.971830 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.971841 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2msq\" (UniqueName: \"kubernetes.io/projected/b856d226-a137-4954-82c5-5929d579387a-kube-api-access-n2msq\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.971912 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b856d226-a137-4954-82c5-5929d579387a-node-exporter-textfile\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.971918 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972017 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-lib-modules\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972048 28504 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-system-cni-dir\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972063 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972123 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972141 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-lib-modules\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972157 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972194 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972224 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972264 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e4d0b174-33e4-46ee-863b-b5cc2a271b85-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972266 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-multus-socket-dir-parent\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972285 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysconfig\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " 
pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972327 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysconfig\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972364 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972389 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s9rk\" (UniqueName: \"kubernetes.io/projected/3c0d0048-6d96-459c-8742-2f092af44a6a-kube-api-access-2s9rk\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972422 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972431 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-hostroot\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" 
Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972445 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-node-exporter-wtmp\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972479 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972502 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972523 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972540 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972547 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-etc-kubernetes\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972560 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972575 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-run-multus-certs\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972584 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972616 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-cnibin\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.972624 master-0 
kubenswrapper[28504]: I0318 13:23:48.972613 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972643 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-run-openvswitch\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.972624 master-0 kubenswrapper[28504]: I0318 13:23:48.972651 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972694 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972757 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972800 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972799 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972834 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972854 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " 
pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972871 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-run\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972869 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/234a5a6c-3790-49d0-b1e7-86f81048d96a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972895 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.972965 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20dc979a-732b-43b5-acc2-118e4c350470-systemd-units\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973078 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-var-lib-kubelet\") pod \"tuned-rlp78\" (UID: 
\"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973136 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-var-lib-kubelet\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973226 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-run\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973256 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-host-slash\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973306 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973333 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: 
\"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973354 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973414 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/46ae7b31-c91c-477e-a04a-a3a8541747be-cnibin\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973456 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4086d06f-d50e-4632-9da7-508909429eef-host-var-lib-kubelet\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973481 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973524 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod 
\"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973589 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/0f16e797-a619-46a8-948a-9fdfc8a9891f-etc-sysctl-d\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973598 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-root\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973618 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:48.973672 master-0 kubenswrapper[28504]: I0318 13:23:48.973642 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:48.974404 master-0 kubenswrapper[28504]: I0318 13:23:48.973693 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:48.981383 master-0 kubenswrapper[28504]: I0318 13:23:48.981325 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:23:49.017099 master-0 kubenswrapper[28504]: I0318 13:23:49.017036 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:23:49.024138 master-0 kubenswrapper[28504]: I0318 13:23:49.023101 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ce8e99d-7b02-4bf4-a438-adde851918cb-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:23:49.025043 master-0 kubenswrapper[28504]: I0318 13:23:49.025011 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:23:49.029726 master-0 kubenswrapper[28504]: I0318 13:23:49.029693 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:23:49.042250 master-0 kubenswrapper[28504]: I0318 13:23:49.042201 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:23:49.062332 master-0 
kubenswrapper[28504]: I0318 13:23:49.062267 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:23:49.069873 master-0 kubenswrapper[28504]: I0318 13:23:49.063811 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:23:49.078212 master-0 kubenswrapper[28504]: I0318 13:23:49.077991 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-node-exporter-wtmp\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:49.079416 master-0 kubenswrapper[28504]: I0318 13:23:49.079369 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-node-exporter-wtmp\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:49.079735 master-0 kubenswrapper[28504]: I0318 13:23:49.079691 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-root\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:49.079779 master-0 kubenswrapper[28504]: I0318 13:23:49.079745 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.080213 master-0 kubenswrapper[28504]: I0318 13:23:49.079815 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.080213 master-0 kubenswrapper[28504]: I0318 13:23:49.079965 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-root\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:49.080213 master-0 kubenswrapper[28504]: I0318 13:23:49.080102 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.080671 master-0 kubenswrapper[28504]: I0318 13:23:49.080348 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.080671 master-0 kubenswrapper[28504]: I0318 13:23:49.080398 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.080671 master-0 kubenswrapper[28504]: I0318 13:23:49.080523 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-sys\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:49.080791 master-0 kubenswrapper[28504]: I0318 13:23:49.080676 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b856d226-a137-4954-82c5-5929d579387a-sys\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:49.081350 master-0 kubenswrapper[28504]: I0318 13:23:49.081157 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:23:49.087711 master-0 kubenswrapper[28504]: I0318 13:23:49.087671 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35925474-e3fe-4cff-aad6-d853816618c7-srv-cert\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:23:49.107885 master-0 kubenswrapper[28504]: I0318 13:23:49.107062 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:23:49.130846 master-0 kubenswrapper[28504]: I0318 13:23:49.128095 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:23:49.162071 master-0 
kubenswrapper[28504]: I0318 13:23:49.161287 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 13:23:49.172412 master-0 kubenswrapper[28504]: I0318 13:23:49.171367 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 13:23:49.178984 master-0 kubenswrapper[28504]: I0318 13:23:49.176401 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad580a2-7f58-4d66-adad-0a53d9777655-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:23:49.182313 master-0 kubenswrapper[28504]: I0318 13:23:49.182286 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:23:49.188766 master-0 kubenswrapper[28504]: I0318 13:23:49.188729 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/5.log" Mar 18 13:23:49.189149 master-0 kubenswrapper[28504]: I0318 13:23:49.189119 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/4.log" Mar 18 13:23:49.189421 master-0 kubenswrapper[28504]: I0318 13:23:49.189384 28504 generic.go:334] "Generic (PLEG): container finished" podID="f2b92a53-0b61-4e1d-a306-f9a498e48b38" containerID="0f1b7521916bb1f15f4a8946c701639d4de35a4fc8e0cbdc319661e84db6acb6" exitCode=1 Mar 18 13:23:49.190015 master-0 kubenswrapper[28504]: I0318 13:23:49.189989 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.193612 master-0 kubenswrapper[28504]: I0318 13:23:49.192247 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad580a2-7f58-4d66-adad-0a53d9777655-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:23:49.211379 master-0 kubenswrapper[28504]: I0318 13:23:49.211081 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:49.212878 master-0 kubenswrapper[28504]: I0318 13:23:49.212742 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 13:23:49.220776 master-0 kubenswrapper[28504]: I0318 13:23:49.220733 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 13:23:49.255810 master-0 kubenswrapper[28504]: I0318 13:23:49.252473 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 13:23:49.258223 master-0 kubenswrapper[28504]: I0318 13:23:49.258035 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/369e9689-e2f6-4276-b096-8db094f8d6ae-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:23:49.263008 master-0 kubenswrapper[28504]: I0318 13:23:49.262805 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:23:49.263944 master-0 kubenswrapper[28504]: I0318 13:23:49.263860 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/234a5a6c-3790-49d0-b1e7-86f81048d96a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:49.284344 master-0 kubenswrapper[28504]: I0318 13:23:49.284294 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") pod \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " Mar 18 13:23:49.284511 master-0 kubenswrapper[28504]: I0318 13:23:49.284355 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") pod \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " Mar 18 13:23:49.284511 master-0 kubenswrapper[28504]: I0318 13:23:49.284416 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock" (OuterVolumeSpecName: "var-lock") pod "810ed1fb-bd32-4e5d-94e6-011f21ff37d3" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:49.284613 master-0 kubenswrapper[28504]: I0318 13:23:49.284542 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "810ed1fb-bd32-4e5d-94e6-011f21ff37d3" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:49.287312 master-0 kubenswrapper[28504]: I0318 13:23:49.285049 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:23:49.287312 master-0 kubenswrapper[28504]: I0318 13:23:49.285586 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:49.287312 master-0 kubenswrapper[28504]: I0318 13:23:49.285604 28504 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:49.287312 master-0 kubenswrapper[28504]: I0318 13:23:49.287205 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:23:49.309289 master-0 kubenswrapper[28504]: I0318 13:23:49.307120 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:23:49.323571 master-0 kubenswrapper[28504]: 
I0318 13:23:49.323527 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:23:49.330607 master-0 kubenswrapper[28504]: I0318 13:23:49.330146 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/369e9689-e2f6-4276-b096-8db094f8d6ae-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:23:49.345792 master-0 kubenswrapper[28504]: I0318 13:23:49.345702 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:23:49.363384 master-0 kubenswrapper[28504]: I0318 13:23:49.363333 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:23:49.363786 master-0 kubenswrapper[28504]: I0318 13:23:49.363740 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:49.387367 master-0 kubenswrapper[28504]: I0318 13:23:49.387311 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 13:23:49.401156 master-0 kubenswrapper[28504]: I0318 13:23:49.401108 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 13:23:49.409645 master-0 kubenswrapper[28504]: I0318 13:23:49.409594 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20dc979a-732b-43b5-acc2-118e4c350470-ovn-node-metrics-cert\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:49.428094 master-0 kubenswrapper[28504]: I0318 13:23:49.427910 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:23:49.446775 master-0 kubenswrapper[28504]: I0318 13:23:49.446698 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:23:49.453520 master-0 kubenswrapper[28504]: I0318 13:23:49.453484 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20dc979a-732b-43b5-acc2-118e4c350470-ovnkube-script-lib\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:49.474179 master-0 kubenswrapper[28504]: I0318 13:23:49.474114 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:23:49.480316 master-0 kubenswrapper[28504]: I0318 13:23:49.478544 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:23:49.486417 master-0 kubenswrapper[28504]: I0318 13:23:49.486362 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:23:49.505082 master-0 kubenswrapper[28504]: I0318 13:23:49.505010 28504 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:23:49.510253 master-0 kubenswrapper[28504]: I0318 13:23:49.510201 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/330df925-8429-4b96-9bfe-caa017c21afa-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:23:49.521836 master-0 kubenswrapper[28504]: I0318 13:23:49.521761 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 13:23:49.523353 master-0 kubenswrapper[28504]: I0318 13:23:49.523294 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e309570-09d0-412a-a74b-c5397d048a30-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:23:49.548891 master-0 kubenswrapper[28504]: I0318 13:23:49.548831 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:23:49.552740 master-0 kubenswrapper[28504]: I0318 13:23:49.551638 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a0944d2-d99a-42eb-81f5-a212b750b8f4-metrics-tls\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:23:49.566867 master-0 kubenswrapper[28504]: I0318 13:23:49.566817 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-ntbvj" Mar 18 13:23:49.587378 master-0 kubenswrapper[28504]: I0318 13:23:49.587327 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 13:23:49.611210 master-0 kubenswrapper[28504]: I0318 13:23:49.611152 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:23:49.615290 master-0 kubenswrapper[28504]: I0318 13:23:49.615229 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-client\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:49.621964 master-0 kubenswrapper[28504]: I0318 13:23:49.621892 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 13:23:49.634586 master-0 kubenswrapper[28504]: I0318 13:23:49.634521 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-encryption-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:49.642989 master-0 kubenswrapper[28504]: I0318 13:23:49.642946 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:23:49.648959 master-0 kubenswrapper[28504]: I0318 13:23:49.648544 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-apiservice-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: 
\"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:49.648959 master-0 kubenswrapper[28504]: I0318 13:23:49.648567 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/375d5112-d2be-47cf-bee1-82614ba71ed8-webhook-cert\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:49.665398 master-0 kubenswrapper[28504]: I0318 13:23:49.665338 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:23:49.669064 master-0 kubenswrapper[28504]: I0318 13:23:49.668991 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b41c9132-92ef-429d-bdd5-9bdb024e04fc-serving-cert\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:49.681487 master-0 kubenswrapper[28504]: I0318 13:23:49.681362 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:23:49.688435 master-0 kubenswrapper[28504]: I0318 13:23:49.688373 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:23:49.701826 master-0 kubenswrapper[28504]: I0318 13:23:49.701769 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-6l8l5" Mar 18 13:23:49.721706 master-0 kubenswrapper[28504]: I0318 13:23:49.721651 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:23:49.729653 master-0 kubenswrapper[28504]: I0318 13:23:49.729614 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d0b174-33e4-46ee-863b-b5cc2a271b85-serving-cert\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:49.743977 master-0 kubenswrapper[28504]: I0318 13:23:49.743447 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 13:23:49.754583 master-0 kubenswrapper[28504]: I0318 13:23:49.754535 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-iptables-alerter-script\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:23:49.760116 master-0 kubenswrapper[28504]: I0318 13:23:49.760060 28504 request.go:700] Waited for 1.013788894s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0 Mar 18 13:23:49.764857 master-0 kubenswrapper[28504]: I0318 13:23:49.764800 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:23:49.766651 master-0 kubenswrapper[28504]: 
I0318 13:23:49.766612 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/16a930da-d793-486f-bcef-cf042d3c427d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:23:49.781907 master-0 kubenswrapper[28504]: I0318 13:23:49.781790 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:23:49.787330 master-0 kubenswrapper[28504]: I0318 13:23:49.787278 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb471665-2b07-48df-9881-3fb663390b23-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:49.803096 master-0 kubenswrapper[28504]: I0318 13:23:49.803043 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:23:49.807828 master-0 kubenswrapper[28504]: I0318 13:23:49.807657 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f82c03-65d1-4a6c-ba09-8a00ae778009-srv-cert\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:23:49.825467 master-0 kubenswrapper[28504]: I0318 13:23:49.825428 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:23:49.828484 master-0 kubenswrapper[28504]: I0318 13:23:49.828453 28504 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:49.843963 master-0 kubenswrapper[28504]: I0318 13:23:49.843901 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-8zrbw" Mar 18 13:23:49.856930 master-0 kubenswrapper[28504]: E0318 13:23:49.856774 28504 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.856930 master-0 kubenswrapper[28504]: E0318 13:23:49.856893 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images podName:2385db6b-4286-4839-822c-aa9c52290172 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.356866406 +0000 UTC m=+7.851672181 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images") pod "machine-config-operator-84d549f6d5-6qlqd" (UID: "2385db6b-4286-4839-822c-aa9c52290172") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.857987 master-0 kubenswrapper[28504]: E0318 13:23:49.857876 28504 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.858097 master-0 kubenswrapper[28504]: E0318 13:23:49.858072 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert podName:bd033b5b-af07-4e69-9a5c-46f7c9bde95a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.358029949 +0000 UTC m=+7.852835714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert") pod "cluster-autoscaler-operator-866dc4744-q8vxr" (UID: "bd033b5b-af07-4e69-9a5c-46f7c9bde95a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.858232 master-0 kubenswrapper[28504]: E0318 13:23:49.858175 28504 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.858314 master-0 kubenswrapper[28504]: E0318 13:23:49.858189 28504 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.858559 master-0 kubenswrapper[28504]: E0318 13:23:49.858301 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle podName:c074751c-6b3c-44df-aca5-42fa69662779 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.358276676 +0000 UTC m=+7.853082451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle") pod "insights-operator-68bf6ff9d6-ckwz8" (UID: "c074751c-6b3c-44df-aca5-42fa69662779") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.858644 master-0 kubenswrapper[28504]: E0318 13:23:49.858632 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls podName:f3c106be-27ea-4849-b365-eff6d25f5e71 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.358619606 +0000 UTC m=+7.853425381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls") pod "machine-config-daemon-2qjl7" (UID: "f3c106be-27ea-4849-b365-eff6d25f5e71") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.858704 master-0 kubenswrapper[28504]: E0318 13:23:49.858502 28504 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.858791 master-0 kubenswrapper[28504]: E0318 13:23:49.858780 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config podName:734f9f10-5bde-44d5-a831-021b93fd667d nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.35876737 +0000 UTC m=+7.853573145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config") pod "machine-approver-5c6485487f-f8zc2" (UID: "734f9f10-5bde-44d5-a831-021b93fd667d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.858850 master-0 kubenswrapper[28504]: E0318 13:23:49.858520 28504 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.858965 master-0 kubenswrapper[28504]: E0318 13:23:49.858930 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle podName:c074751c-6b3c-44df-aca5-42fa69662779 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.358919174 +0000 UTC m=+7.853725039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle") pod "insights-operator-68bf6ff9d6-ckwz8" (UID: "c074751c-6b3c-44df-aca5-42fa69662779") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859067 master-0 kubenswrapper[28504]: E0318 13:23:49.858546 28504 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859164 master-0 kubenswrapper[28504]: E0318 13:23:49.859151 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit podName:b41c9132-92ef-429d-bdd5-9bdb024e04fc nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.35914086 +0000 UTC m=+7.853946695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit") pod "apiserver-574f6d5bf6-8krhk" (UID: "b41c9132-92ef-429d-bdd5-9bdb024e04fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859238 master-0 kubenswrapper[28504]: E0318 13:23:49.858555 28504 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.859318 master-0 kubenswrapper[28504]: E0318 13:23:49.859308 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls podName:d3f208f9-e2e1-4fae-a47a-f58b722e0ad5 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.359300115 +0000 UTC m=+7.854105970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-ncjbh" (UID: "d3f208f9-e2e1-4fae-a47a-f58b722e0ad5") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.859530 master-0 kubenswrapper[28504]: E0318 13:23:49.859515 28504 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859619 master-0 kubenswrapper[28504]: E0318 13:23:49.859610 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config podName:a5a93d05-3c8e-4666-9a55-d8f9e902db78 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.359601554 +0000 UTC m=+7.854407399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config") pod "controller-manager-66b7876dbc-rdzrh" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859682 master-0 kubenswrapper[28504]: E0318 13:23:49.858596 28504 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859752 master-0 kubenswrapper[28504]: E0318 13:23:49.859742 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca podName:7fa6920b-f7d9-4758-bba9-356a2c8b1b67 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.359735057 +0000 UTC m=+7.854540832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-9nw6w" (UID: "7fa6920b-f7d9-4758-bba9-356a2c8b1b67") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.859807 master-0 kubenswrapper[28504]: E0318 13:23:49.859263 28504 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.859886 master-0 kubenswrapper[28504]: E0318 13:23:49.859877 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token podName:02879f34-7062-4f07-9a5a-f965103d9182 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.359870181 +0000 UTC m=+7.854676026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token") pod "machine-config-server-4f5s4" (UID: "02879f34-7062-4f07-9a5a-f965103d9182") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.860007 master-0 kubenswrapper[28504]: E0318 13:23:49.859994 28504 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860090 master-0 kubenswrapper[28504]: E0318 13:23:49.860081 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config podName:d3f208f9-e2e1-4fae-a47a-f58b722e0ad5 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.360073787 +0000 UTC m=+7.854879562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-ncjbh" (UID: "d3f208f9-e2e1-4fae-a47a-f58b722e0ad5") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860575 master-0 kubenswrapper[28504]: E0318 13:23:49.860550 28504 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.860628 master-0 kubenswrapper[28504]: E0318 13:23:49.860609 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls podName:2385db6b-4286-4839-822c-aa9c52290172 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.360596152 +0000 UTC m=+7.855401927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls") pod "machine-config-operator-84d549f6d5-6qlqd" (UID: "2385db6b-4286-4839-822c-aa9c52290172") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.860628 master-0 kubenswrapper[28504]: E0318 13:23:49.860564 28504 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860695 master-0 kubenswrapper[28504]: E0318 13:23:49.860625 28504 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.860695 master-0 kubenswrapper[28504]: E0318 13:23:49.860648 28504 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860695 master-0 kubenswrapper[28504]: E0318 13:23:49.860651 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca podName:a5a93d05-3c8e-4666-9a55-d8f9e902db78 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.360640323 +0000 UTC m=+7.855446108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca") pod "controller-manager-66b7876dbc-rdzrh" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860786 master-0 kubenswrapper[28504]: E0318 13:23:49.860718 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert podName:c074751c-6b3c-44df-aca5-42fa69662779 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.360695965 +0000 UTC m=+7.855501840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert") pod "insights-operator-68bf6ff9d6-ckwz8" (UID: "c074751c-6b3c-44df-aca5-42fa69662779") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.860786 master-0 kubenswrapper[28504]: E0318 13:23:49.860740 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config podName:2385db6b-4286-4839-822c-aa9c52290172 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.360730596 +0000 UTC m=+7.855536461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config") pod "machine-config-operator-84d549f6d5-6qlqd" (UID: "2385db6b-4286-4839-822c-aa9c52290172") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860849 master-0 kubenswrapper[28504]: E0318 13:23:49.860789 28504 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.860876 master-0 kubenswrapper[28504]: E0318 13:23:49.860845 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca podName:b41c9132-92ef-429d-bdd5-9bdb024e04fc nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.360834709 +0000 UTC m=+7.855640564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca") pod "apiserver-574f6d5bf6-8krhk" (UID: "b41c9132-92ef-429d-bdd5-9bdb024e04fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861037 master-0 kubenswrapper[28504]: E0318 13:23:49.860976 28504 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861037 master-0 kubenswrapper[28504]: E0318 13:23:49.861006 28504 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.861037 master-0 kubenswrapper[28504]: E0318 13:23:49.861019 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images podName:d3f208f9-e2e1-4fae-a47a-f58b722e0ad5 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.361009423 +0000 UTC m=+7.855815198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images") pod "cluster-cloud-controller-manager-operator-7dff898856-ncjbh" (UID: "d3f208f9-e2e1-4fae-a47a-f58b722e0ad5") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861153 master-0 kubenswrapper[28504]: E0318 13:23:49.861093 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls podName:d2e2ef3a-a6e9-44dc-93c7-9f533e75502a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.361081575 +0000 UTC m=+7.855887410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-nf22v" (UID: "d2e2ef3a-a6e9-44dc-93c7-9f533e75502a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.861192 master-0 kubenswrapper[28504]: E0318 13:23:49.861183 28504 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861246 master-0 kubenswrapper[28504]: E0318 13:23:49.861219 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config podName:b41c9132-92ef-429d-bdd5-9bdb024e04fc nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.361209439 +0000 UTC m=+7.856015304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config") pod "apiserver-574f6d5bf6-8krhk" (UID: "b41c9132-92ef-429d-bdd5-9bdb024e04fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861292 master-0 kubenswrapper[28504]: E0318 13:23:49.861256 28504 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861324 master-0 kubenswrapper[28504]: E0318 13:23:49.861313 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca podName:b41c9132-92ef-429d-bdd5-9bdb024e04fc nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.361301712 +0000 UTC m=+7.856107497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca") pod "apiserver-574f6d5bf6-8krhk" (UID: "b41c9132-92ef-429d-bdd5-9bdb024e04fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861392 master-0 kubenswrapper[28504]: E0318 13:23:49.861379 28504 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861470 master-0 kubenswrapper[28504]: E0318 13:23:49.861460 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config podName:d2e2ef3a-a6e9-44dc-93c7-9f533e75502a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.361451046 +0000 UTC m=+7.856256821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config") pod "machine-api-operator-6fbb6cf6f9-nf22v" (UID: "d2e2ef3a-a6e9-44dc-93c7-9f533e75502a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861865 master-0 kubenswrapper[28504]: E0318 13:23:49.861833 28504 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.861960 master-0 kubenswrapper[28504]: E0318 13:23:49.861950 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config podName:cb471665-2b07-48df-9881-3fb663390b23 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.361888528 +0000 UTC m=+7.856694363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config") pod "openshift-apiserver-operator-d65958b8-lwfvl" (UID: "cb471665-2b07-48df-9881-3fb663390b23") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.862705 master-0 kubenswrapper[28504]: E0318 13:23:49.862676 28504 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.862752 master-0 kubenswrapper[28504]: E0318 13:23:49.862738 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles podName:a5a93d05-3c8e-4666-9a55-d8f9e902db78 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.362725642 +0000 UTC m=+7.857531487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles") pod "controller-manager-66b7876dbc-rdzrh" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.862791 master-0 kubenswrapper[28504]: I0318 13:23:49.862748 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 13:23:49.862839 master-0 kubenswrapper[28504]: E0318 13:23:49.862681 28504 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.862922 master-0 kubenswrapper[28504]: E0318 13:23:49.862911 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs podName:02879f34-7062-4f07-9a5a-f965103d9182 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.362900987 +0000 UTC m=+7.857706762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs") pod "machine-config-server-4f5s4" (UID: "02879f34-7062-4f07-9a5a-f965103d9182") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.863890 master-0 kubenswrapper[28504]: E0318 13:23:49.863877 28504 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.864025 master-0 kubenswrapper[28504]: E0318 13:23:49.863923 28504 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.864080 master-0 kubenswrapper[28504]: E0318 13:23:49.864005 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls podName:17adbc1a-f29c-4278-b29a-0cc3879b753f nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.363996198 +0000 UTC m=+7.858801973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls") pod "machine-config-controller-b4f87c5b9-qpp2s" (UID: "17adbc1a-f29c-4278-b29a-0cc3879b753f") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.864080 master-0 kubenswrapper[28504]: E0318 13:23:49.864065 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config podName:734f9f10-5bde-44d5-a831-021b93fd667d nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.36405297 +0000 UTC m=+7.858858815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config") pod "machine-approver-5c6485487f-f8zc2" (UID: "734f9f10-5bde-44d5-a831-021b93fd667d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865069 master-0 kubenswrapper[28504]: E0318 13:23:49.865034 28504 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865127 master-0 kubenswrapper[28504]: E0318 13:23:49.865105 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config podName:f3c106be-27ea-4849-b365-eff6d25f5e71 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.365081119 +0000 UTC m=+7.859886974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config") pod "machine-config-daemon-2qjl7" (UID: "f3c106be-27ea-4849-b365-eff6d25f5e71") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865217 master-0 kubenswrapper[28504]: E0318 13:23:49.865203 28504 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865298 master-0 kubenswrapper[28504]: E0318 13:23:49.865288 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config podName:bd033b5b-af07-4e69-9a5c-46f7c9bde95a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.365279794 +0000 UTC m=+7.860085569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-q8vxr" (UID: "bd033b5b-af07-4e69-9a5c-46f7c9bde95a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865363 master-0 kubenswrapper[28504]: E0318 13:23:49.865237 28504 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865439 master-0 kubenswrapper[28504]: E0318 13:23:49.865429 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images podName:d2e2ef3a-a6e9-44dc-93c7-9f533e75502a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.365418608 +0000 UTC m=+7.860224373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images") pod "machine-api-operator-6fbb6cf6f9-nf22v" (UID: "d2e2ef3a-a6e9-44dc-93c7-9f533e75502a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.865509 master-0 kubenswrapper[28504]: E0318 13:23:49.865426 28504 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.865596 master-0 kubenswrapper[28504]: E0318 13:23:49.865586 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls podName:734f9f10-5bde-44d5-a831-021b93fd667d nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.365579193 +0000 UTC m=+7.860384968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls") pod "machine-approver-5c6485487f-f8zc2" (UID: "734f9f10-5bde-44d5-a831-021b93fd667d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.866721 master-0 kubenswrapper[28504]: E0318 13:23:49.866683 28504 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.866786 master-0 kubenswrapper[28504]: E0318 13:23:49.866767 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config podName:17adbc1a-f29c-4278-b29a-0cc3879b753f nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.366752036 +0000 UTC m=+7.861557851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config") pod "machine-config-controller-b4f87c5b9-qpp2s" (UID: "17adbc1a-f29c-4278-b29a-0cc3879b753f") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.866877 master-0 kubenswrapper[28504]: E0318 13:23:49.866863 28504 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.867000 master-0 kubenswrapper[28504]: E0318 13:23:49.866982 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca podName:e4d0b174-33e4-46ee-863b-b5cc2a271b85 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.366970302 +0000 UTC m=+7.861776147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca") pod "cluster-version-operator-7d58488df-2bmkn" (UID: "e4d0b174-33e4-46ee-863b-b5cc2a271b85") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.867091 master-0 kubenswrapper[28504]: E0318 13:23:49.866916 28504 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.867169 master-0 kubenswrapper[28504]: E0318 13:23:49.867160 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle podName:b41c9132-92ef-429d-bdd5-9bdb024e04fc nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.367148607 +0000 UTC m=+7.861954382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle") pod "apiserver-574f6d5bf6-8krhk" (UID: "b41c9132-92ef-429d-bdd5-9bdb024e04fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.884202 master-0 kubenswrapper[28504]: I0318 13:23:49.882389 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-rm9sr"
Mar 18 13:23:49.907212 master-0 kubenswrapper[28504]: I0318 13:23:49.907166 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-wl4c6"
Mar 18 13:23:49.922441 master-0 kubenswrapper[28504]: I0318 13:23:49.922397 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2d45m"
Mar 18 13:23:49.962098 master-0 kubenswrapper[28504]: I0318 13:23:49.962029 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 13:23:49.970206 master-0 kubenswrapper[28504]: E0318 13:23:49.970160 28504 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.970349 master-0 kubenswrapper[28504]: E0318 13:23:49.970268 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config podName:b856d226-a137-4954-82c5-5929d579387a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.47024591 +0000 UTC m=+7.965051685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config") pod "node-exporter-f55c6" (UID: "b856d226-a137-4954-82c5-5929d579387a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970723 28504 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970755 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970777 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls podName:6ed4f640-d481-4e7a-92eb-f0eda82e138c nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.470768624 +0000 UTC m=+7.965574389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-dldw9" (UID: "6ed4f640-d481-4e7a-92eb-f0eda82e138c") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970774 28504 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970806 28504 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970809 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.470798415 +0000 UTC m=+7.965604190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970845 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config podName:3c0d0048-6d96-459c-8742-2f092af44a6a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.470837316 +0000 UTC m=+7.965643091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-bshl9" (UID: "3c0d0048-6d96-459c-8742-2f092af44a6a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970722 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970863 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls podName:b856d226-a137-4954-82c5-5929d579387a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.470855007 +0000 UTC m=+7.965660782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls") pod "node-exporter-f55c6" (UID: "b856d226-a137-4954-82c5-5929d579387a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973035 master-0 kubenswrapper[28504]: E0318 13:23:49.970901 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca podName:5a715e53-1874-4993-93d1-504c3470a6f5 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.470882228 +0000 UTC m=+7.965688003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-6twz2" (UID: "5a715e53-1874-4993-93d1-504c3470a6f5") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973147 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973179 28504 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973193 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca podName:b856d226-a137-4954-82c5-5929d579387a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.473181833 +0000 UTC m=+7.967987668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca") pod "node-exporter-f55c6" (UID: "b856d226-a137-4954-82c5-5929d579387a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973224 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973236 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs podName:bc9af4af-fb39-4a51-83ae-dab3f1d159f2 nodeName:}" failed.
No retries permitted until 2026-03-18 13:23:50.473223914 +0000 UTC m=+7.968029759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs") pod "multus-admission-controller-58c9f8fc64-bnrjt" (UID: "bc9af4af-fb39-4a51-83ae-dab3f1d159f2") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973254 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca podName:6ed4f640-d481-4e7a-92eb-f0eda82e138c nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.473246125 +0000 UTC m=+7.968052000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca") pod "kube-state-metrics-7bbc969446-dldw9" (UID: "6ed4f640-d481-4e7a-92eb-f0eda82e138c") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973271 28504 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973295 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973303 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls podName:3c0d0048-6d96-459c-8742-2f092af44a6a nodeName:}" failed. 
No retries permitted until 2026-03-18 13:23:50.473294966 +0000 UTC m=+7.968100831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-bshl9" (UID: "3c0d0048-6d96-459c-8742-2f092af44a6a") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973323 28504 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973274 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973348 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973366 master-0 kubenswrapper[28504]: E0318 13:23:49.973326 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap podName:6ed4f640-d481-4e7a-92eb-f0eda82e138c nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.473317827 +0000 UTC m=+7.968123702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-dldw9" (UID: "6ed4f640-d481-4e7a-92eb-f0eda82e138c") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973386 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config podName:5a715e53-1874-4993-93d1-504c3470a6f5 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.473378398 +0000 UTC m=+7.968184273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-6twz2" (UID: "5a715e53-1874-4993-93d1-504c3470a6f5") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973398 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973400 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.473394279 +0000 UTC m=+7.968200154 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973418 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.473410709 +0000 UTC m=+7.968216594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973431 28504 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973433 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca podName:3c0d0048-6d96-459c-8742-2f092af44a6a nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.47342737 +0000 UTC m=+7.968233145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-bshl9" (UID: "3c0d0048-6d96-459c-8742-2f092af44a6a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:49.973759 master-0 kubenswrapper[28504]: E0318 13:23:49.973457 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls podName:5a715e53-1874-4993-93d1-504c3470a6f5 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.47344908 +0000 UTC m=+7.968254955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-6twz2" (UID: "5a715e53-1874-4993-93d1-504c3470a6f5") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.976012 master-0 kubenswrapper[28504]: E0318 13:23:49.974543 28504 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.976012 master-0 kubenswrapper[28504]: E0318 13:23:49.974602 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config podName:6ed4f640-d481-4e7a-92eb-f0eda82e138c nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.474589753 +0000 UTC m=+7.969395608 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-dldw9" (UID: "6ed4f640-d481-4e7a-92eb-f0eda82e138c") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.976012 master-0 kubenswrapper[28504]: E0318 13:23:49.975029 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.976012 master-0 kubenswrapper[28504]: E0318 13:23:49.975073 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.475062366 +0000 UTC m=+7.969868221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.976012 master-0 kubenswrapper[28504]: E0318 13:23:49.975109 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.976012 master-0 kubenswrapper[28504]: E0318 13:23:49.975140 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:50.475132328 +0000 UTC m=+7.969938193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:49.986013 master-0 kubenswrapper[28504]: I0318 13:23:49.983718 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:23:50.002869 master-0 kubenswrapper[28504]: I0318 13:23:50.002664 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:23:50.021516 master-0 kubenswrapper[28504]: I0318 13:23:50.021476 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:23:50.042229 master-0 kubenswrapper[28504]: I0318 13:23:50.042182 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 13:23:50.126067 master-0 kubenswrapper[28504]: I0318 13:23:50.124257 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 13:23:50.126067 master-0 kubenswrapper[28504]: I0318 13:23:50.124503 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 13:23:50.143406 master-0 kubenswrapper[28504]: I0318 13:23:50.143359 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:23:50.145106 master-0 kubenswrapper[28504]: I0318 13:23:50.145070 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:23:50.150578 master-0 kubenswrapper[28504]: I0318 13:23:50.150510 28504 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 13:23:50.165489 master-0 kubenswrapper[28504]: I0318 13:23:50.165432 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:23:50.211157 master-0 kubenswrapper[28504]: I0318 13:23:50.211108 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:23:50.247970 master-0 kubenswrapper[28504]: I0318 13:23:50.247924 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:23:50.249327 master-0 kubenswrapper[28504]: I0318 13:23:50.249311 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 13:23:50.263122 master-0 kubenswrapper[28504]: I0318 13:23:50.262815 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:23:50.263397 master-0 kubenswrapper[28504]: I0318 13:23:50.263370 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:23:50.280989 master-0 kubenswrapper[28504]: I0318 13:23:50.280628 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:50.290516 master-0 kubenswrapper[28504]: I0318 13:23:50.288591 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:23:50.301507 master-0 kubenswrapper[28504]: I0318 13:23:50.301427 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:23:50.323359 master-0 kubenswrapper[28504]: I0318 13:23:50.323302 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:23:50.342489 master-0 kubenswrapper[28504]: I0318 13:23:50.342190 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 13:23:50.358632 master-0 kubenswrapper[28504]: I0318 13:23:50.358590 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:50.358969 master-0 kubenswrapper[28504]: I0318 13:23:50.358850 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:50.359736 master-0 kubenswrapper[28504]: I0318 13:23:50.359715 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:50.359838 master-0 kubenswrapper[28504]: I0318 13:23:50.359824 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:50.359964 master-0 kubenswrapper[28504]: I0318 13:23:50.359950 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:23:50.360089 master-0 kubenswrapper[28504]: I0318 13:23:50.360073 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:50.360208 master-0 kubenswrapper[28504]: I0318 13:23:50.360195 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 
18 13:23:50.360573 master-0 kubenswrapper[28504]: I0318 13:23:50.360560 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-cert\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:23:50.360979 master-0 kubenswrapper[28504]: I0318 13:23:50.360962 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:23:50.364320 master-0 kubenswrapper[28504]: I0318 13:23:50.364272 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:23:50.383276 master-0 kubenswrapper[28504]: I0318 13:23:50.383230 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:23:50.402948 master-0 kubenswrapper[28504]: I0318 13:23:50.402895 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-68f42" Mar 18 13:23:50.420913 master-0 kubenswrapper[28504]: I0318 13:23:50.420864 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:23:50.441322 master-0 kubenswrapper[28504]: I0318 13:23:50.441269 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:23:50.461631 master-0 kubenswrapper[28504]: I0318 13:23:50.461586 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:50.461874 master-0 kubenswrapper[28504]: I0318 13:23:50.461852 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:50.462011 master-0 kubenswrapper[28504]: I0318 13:23:50.461995 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:50.462105 master-0 kubenswrapper[28504]: I0318 13:23:50.462067 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:50.462237 master-0 kubenswrapper[28504]: I0318 13:23:50.462220 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: 
\"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.462374 master-0 kubenswrapper[28504]: I0318 13:23:50.462356 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:50.462475 master-0 kubenswrapper[28504]: I0318 13:23:50.462445 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-audit\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.462526 master-0 kubenswrapper[28504]: I0318 13:23:50.462361 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:23:50.462612 master-0 kubenswrapper[28504]: I0318 13:23:50.462596 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:50.462867 master-0 kubenswrapper[28504]: I0318 13:23:50.462835 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:50.462997 master-0 kubenswrapper[28504]: I0318 13:23:50.462960 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:50.463047 master-0 kubenswrapper[28504]: I0318 13:23:50.463033 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:50.463120 master-0 kubenswrapper[28504]: I0318 13:23:50.463100 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:50.463177 master-0 kubenswrapper[28504]: I0318 13:23:50.463134 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:50.463400 master-0 kubenswrapper[28504]: I0318 13:23:50.463381 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:50.463556 master-0 kubenswrapper[28504]: I0318 13:23:50.463437 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb471665-2b07-48df-9881-3fb663390b23-config\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:50.463617 master-0 kubenswrapper[28504]: I0318 13:23:50.463544 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:50.463721 master-0 kubenswrapper[28504]: I0318 13:23:50.463707 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:50.464103 master-0 kubenswrapper[28504]: I0318 13:23:50.464088 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:50.464224 master-0 kubenswrapper[28504]: I0318 13:23:50.464211 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:23:50.464358 master-0 kubenswrapper[28504]: I0318 13:23:50.464345 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:50.464495 master-0 kubenswrapper[28504]: I0318 13:23:50.464475 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.464707 master-0 kubenswrapper[28504]: I0318 13:23:50.464686 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.464844 master-0 kubenswrapper[28504]: I0318 13:23:50.464815 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:50.465006 master-0 kubenswrapper[28504]: I0318 13:23:50.464919 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-etcd-serving-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.465071 master-0 kubenswrapper[28504]: I0318 13:23:50.464785 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-trusted-ca-bundle\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.465071 master-0 kubenswrapper[28504]: I0318 13:23:50.464505 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d0b174-33e4-46ee-863b-b5cc2a271b85-service-ca\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:50.465135 master-0 kubenswrapper[28504]: I0318 13:23:50.464476 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:23:50.465250 master-0 kubenswrapper[28504]: I0318 13:23:50.465233 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.465340 master-0 kubenswrapper[28504]: I0318 13:23:50.465328 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:50.465456 master-0 kubenswrapper[28504]: I0318 13:23:50.465414 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-image-import-ca\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.465456 master-0 kubenswrapper[28504]: I0318 13:23:50.465433 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.465693 master-0 kubenswrapper[28504]: I0318 13:23:50.465665 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 
13:23:50.465744 master-0 kubenswrapper[28504]: I0318 13:23:50.465714 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:50.465744 master-0 kubenswrapper[28504]: I0318 13:23:50.465736 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:50.465813 master-0 kubenswrapper[28504]: I0318 13:23:50.465760 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:50.465971 master-0 kubenswrapper[28504]: I0318 13:23:50.465931 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41c9132-92ef-429d-bdd5-9bdb024e04fc-config\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:50.466093 master-0 kubenswrapper[28504]: I0318 13:23:50.465979 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod 
\"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:50.488023 master-0 kubenswrapper[28504]: I0318 13:23:50.487902 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:23:50.493886 master-0 kubenswrapper[28504]: I0318 13:23:50.493860 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:50.506950 master-0 kubenswrapper[28504]: I0318 13:23:50.503133 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:23:50.521073 master-0 kubenswrapper[28504]: I0318 13:23:50.520981 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2bvqk" Mar 18 13:23:50.542213 master-0 kubenswrapper[28504]: I0318 13:23:50.542151 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 13:23:50.549476 master-0 kubenswrapper[28504]: I0318 13:23:50.549422 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:50.560608 master-0 kubenswrapper[28504]: I0318 13:23:50.560565 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 
13:23:50.567125 master-0 kubenswrapper[28504]: I0318 13:23:50.567075 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:50.567305 master-0 kubenswrapper[28504]: I0318 13:23:50.567144 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:50.567423 master-0 kubenswrapper[28504]: I0318 13:23:50.567388 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:50.567485 master-0 kubenswrapper[28504]: I0318 13:23:50.567463 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:50.567558 master-0 kubenswrapper[28504]: I0318 13:23:50.567535 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:50.567595 master-0 kubenswrapper[28504]: I0318 13:23:50.567561 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:50.567782 master-0 kubenswrapper[28504]: I0318 13:23:50.567754 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:50.567885 master-0 kubenswrapper[28504]: I0318 13:23:50.567869 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:23:50.568020 master-0 kubenswrapper[28504]: I0318 13:23:50.567991 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:50.568072 master-0 kubenswrapper[28504]: I0318 13:23:50.568045 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:50.568072 master-0 kubenswrapper[28504]: I0318 13:23:50.568070 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:50.568155 master-0 kubenswrapper[28504]: I0318 13:23:50.568138 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:50.568266 master-0 kubenswrapper[28504]: I0318 13:23:50.568251 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:50.568358 master-0 kubenswrapper[28504]: I0318 13:23:50.568292 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:50.568453 master-0 kubenswrapper[28504]: I0318 13:23:50.568434 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:50.568545 master-0 kubenswrapper[28504]: I0318 13:23:50.568524 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:50.568611 master-0 kubenswrapper[28504]: I0318 13:23:50.568563 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:50.568646 master-0 kubenswrapper[28504]: I0318 13:23:50.568609 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-f55c6\" (UID: 
\"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:50.568718 master-0 kubenswrapper[28504]: I0318 13:23:50.568701 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:50.581530 master-0 kubenswrapper[28504]: I0318 13:23:50.581478 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 13:23:50.585469 master-0 kubenswrapper[28504]: I0318 13:23:50.585431 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c074751c-6b3c-44df-aca5-42fa69662779-serving-cert\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:50.601156 master-0 kubenswrapper[28504]: I0318 13:23:50.601047 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 13:23:50.626971 master-0 kubenswrapper[28504]: I0318 13:23:50.626909 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 13:23:50.631596 master-0 kubenswrapper[28504]: I0318 13:23:50.631566 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c074751c-6b3c-44df-aca5-42fa69662779-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:50.642401 master-0 
kubenswrapper[28504]: I0318 13:23:50.642361 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:23:50.646885 master-0 kubenswrapper[28504]: I0318 13:23:50.646847 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:50.662159 master-0 kubenswrapper[28504]: I0318 13:23:50.662105 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-spdqf" Mar 18 13:23:50.681849 master-0 kubenswrapper[28504]: I0318 13:23:50.681788 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 13:23:50.686536 master-0 kubenswrapper[28504]: I0318 13:23:50.686494 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-config\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:50.702236 master-0 kubenswrapper[28504]: I0318 13:23:50.702168 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:23:50.704543 master-0 kubenswrapper[28504]: I0318 13:23:50.704505 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-images\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " 
pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:50.722683 master-0 kubenswrapper[28504]: I0318 13:23:50.722631 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:23:50.731250 master-0 kubenswrapper[28504]: I0318 13:23:50.731191 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-images\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:50.741748 master-0 kubenswrapper[28504]: I0318 13:23:50.741655 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-2vnp2" Mar 18 13:23:50.761167 master-0 kubenswrapper[28504]: I0318 13:23:50.760888 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:23:50.767077 master-0 kubenswrapper[28504]: I0318 13:23:50.767032 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2385db6b-4286-4839-822c-aa9c52290172-proxy-tls\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:50.780569 master-0 kubenswrapper[28504]: I0318 13:23:50.780432 28504 request.go:700] Waited for 2.000072208s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0 Mar 18 13:23:50.782014 master-0 kubenswrapper[28504]: I0318 
13:23:50.781923 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:23:50.784092 master-0 kubenswrapper[28504]: I0318 13:23:50.784037 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c106be-27ea-4849-b365-eff6d25f5e71-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:50.784179 master-0 kubenswrapper[28504]: I0318 13:23:50.784109 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2385db6b-4286-4839-822c-aa9c52290172-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:50.784365 master-0 kubenswrapper[28504]: I0318 13:23:50.784329 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17adbc1a-f29c-4278-b29a-0cc3879b753f-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:50.801544 master-0 kubenswrapper[28504]: I0318 13:23:50.801477 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:23:50.821298 master-0 kubenswrapper[28504]: I0318 13:23:50.821250 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:23:50.841926 master-0 kubenswrapper[28504]: I0318 13:23:50.841873 28504 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:23:50.861184 master-0 kubenswrapper[28504]: I0318 13:23:50.861111 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-crbnv" Mar 18 13:23:50.882933 master-0 kubenswrapper[28504]: I0318 13:23:50.882180 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:23:50.885234 master-0 kubenswrapper[28504]: I0318 13:23:50.885183 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/734f9f10-5bde-44d5-a831-021b93fd667d-machine-approver-tls\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:50.901987 master-0 kubenswrapper[28504]: I0318 13:23:50.901917 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:23:50.914426 master-0 kubenswrapper[28504]: I0318 13:23:50.911742 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-auth-proxy-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:50.921731 master-0 kubenswrapper[28504]: I0318 13:23:50.921684 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:23:50.924026 master-0 kubenswrapper[28504]: I0318 13:23:50.923988 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/734f9f10-5bde-44d5-a831-021b93fd667d-config\") pod \"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:50.941498 master-0 kubenswrapper[28504]: I0318 13:23:50.941416 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:23:50.962768 master-0 kubenswrapper[28504]: I0318 13:23:50.962666 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:23:50.971086 master-0 kubenswrapper[28504]: I0318 13:23:50.971021 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c106be-27ea-4849-b365-eff6d25f5e71-proxy-tls\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:50.981841 master-0 kubenswrapper[28504]: I0318 13:23:50.981781 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xvxxf" Mar 18 13:23:51.001783 master-0 kubenswrapper[28504]: I0318 13:23:51.001661 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:23:51.002960 master-0 kubenswrapper[28504]: I0318 13:23:51.002875 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:51.022752 master-0 
kubenswrapper[28504]: I0318 13:23:51.022673 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lsr6r" Mar 18 13:23:51.042245 master-0 kubenswrapper[28504]: I0318 13:23:51.042186 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:23:51.061157 master-0 kubenswrapper[28504]: I0318 13:23:51.061077 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:23:51.082929 master-0 kubenswrapper[28504]: I0318 13:23:51.082866 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:23:51.093715 master-0 kubenswrapper[28504]: I0318 13:23:51.093664 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:51.104801 master-0 kubenswrapper[28504]: I0318 13:23:51.104672 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 13:23:51.107003 master-0 kubenswrapper[28504]: I0318 13:23:51.106927 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:51.122199 master-0 kubenswrapper[28504]: I0318 13:23:51.122151 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-rxnwp" Mar 18 13:23:51.143876 master-0 kubenswrapper[28504]: I0318 13:23:51.143592 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 13:23:51.153998 master-0 kubenswrapper[28504]: I0318 13:23:51.153791 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/17adbc1a-f29c-4278-b29a-0cc3879b753f-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:51.162979 master-0 kubenswrapper[28504]: I0318 13:23:51.162645 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 13:23:51.163294 master-0 kubenswrapper[28504]: I0318 13:23:51.163188 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-node-bootstrap-token\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:51.182468 master-0 kubenswrapper[28504]: I0318 13:23:51.182413 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 13:23:51.184036 master-0 kubenswrapper[28504]: I0318 13:23:51.183982 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/02879f34-7062-4f07-9a5a-f965103d9182-certs\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:51.202914 master-0 kubenswrapper[28504]: I0318 13:23:51.202855 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 13:23:51.209239 master-0 kubenswrapper[28504]: I0318 13:23:51.208640 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:51.209239 master-0 kubenswrapper[28504]: I0318 13:23:51.208719 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0d0048-6d96-459c-8742-2f092af44a6a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:51.209239 master-0 kubenswrapper[28504]: I0318 13:23:51.208729 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b856d226-a137-4954-82c5-5929d579387a-metrics-client-ca\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:51.209239 master-0 kubenswrapper[28504]: I0318 13:23:51.208886 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5a715e53-1874-4993-93d1-504c3470a6f5-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: 
\"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:51.221108 master-0 kubenswrapper[28504]: I0318 13:23:51.221027 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qmkzj" Mar 18 13:23:51.241498 master-0 kubenswrapper[28504]: I0318 13:23:51.241405 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 13:23:51.249196 master-0 kubenswrapper[28504]: I0318 13:23:51.249139 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-tls\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:51.261299 master-0 kubenswrapper[28504]: I0318 13:23:51.261183 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 13:23:51.269062 master-0 kubenswrapper[28504]: I0318 13:23:51.269013 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b856d226-a137-4954-82c5-5929d579387a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:51.281477 master-0 kubenswrapper[28504]: I0318 13:23:51.281418 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nq9c8" Mar 18 13:23:51.301814 master-0 kubenswrapper[28504]: I0318 13:23:51.301744 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:23:51.308068 master-0 
kubenswrapper[28504]: I0318 13:23:51.307982 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:51.321536 master-0 kubenswrapper[28504]: I0318 13:23:51.321277 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-j868d" Mar 18 13:23:51.342315 master-0 kubenswrapper[28504]: I0318 13:23:51.342145 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 13:23:51.348594 master-0 kubenswrapper[28504]: I0318 13:23:51.348552 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:51.362013 master-0 kubenswrapper[28504]: I0318 13:23:51.361781 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wbjcs" Mar 18 13:23:51.381701 master-0 kubenswrapper[28504]: I0318 13:23:51.381645 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 13:23:51.389289 master-0 kubenswrapper[28504]: I0318 13:23:51.389227 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3c0d0048-6d96-459c-8742-2f092af44a6a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:51.401984 master-0 kubenswrapper[28504]: I0318 13:23:51.401799 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:23:51.409502 master-0 kubenswrapper[28504]: I0318 13:23:51.409456 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a715e53-1874-4993-93d1-504c3470a6f5-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:51.421689 master-0 kubenswrapper[28504]: I0318 13:23:51.421646 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 13:23:51.429034 master-0 kubenswrapper[28504]: I0318 13:23:51.428987 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:51.442424 master-0 kubenswrapper[28504]: I0318 13:23:51.442372 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 13:23:51.448898 master-0 kubenswrapper[28504]: I0318 13:23:51.448844 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") 
pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:51.461814 master-0 kubenswrapper[28504]: I0318 13:23:51.461756 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 13:23:51.469537 master-0 kubenswrapper[28504]: I0318 13:23:51.469496 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:51.482733 master-0 kubenswrapper[28504]: I0318 13:23:51.482669 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 13:23:51.489383 master-0 kubenswrapper[28504]: I0318 13:23:51.489320 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:51.502701 master-0 kubenswrapper[28504]: I0318 13:23:51.502649 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2dpn1smcfbjnb" Mar 18 13:23:51.509486 master-0 kubenswrapper[28504]: I0318 13:23:51.509418 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " 
pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:51.521402 master-0 kubenswrapper[28504]: I0318 13:23:51.521268 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jlgxc" Mar 18 13:23:51.541625 master-0 kubenswrapper[28504]: I0318 13:23:51.541571 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 13:23:51.548061 master-0 kubenswrapper[28504]: I0318 13:23:51.548011 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:51.563006 master-0 kubenswrapper[28504]: I0318 13:23:51.562918 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-x4c9n" Mar 18 13:23:51.568029 master-0 kubenswrapper[28504]: E0318 13:23:51.567978 28504 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:51.568118 master-0 kubenswrapper[28504]: E0318 13:23:51.568067 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap podName:6ed4f640-d481-4e7a-92eb-f0eda82e138c nodeName:}" failed. No retries permitted until 2026-03-18 13:23:52.568045677 +0000 UTC m=+10.062851452 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-dldw9" (UID: "6ed4f640-d481-4e7a-92eb-f0eda82e138c") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:23:51.568265 master-0 kubenswrapper[28504]: E0318 13:23:51.568181 28504 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:51.568314 master-0 kubenswrapper[28504]: E0318 13:23:51.568271 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs podName:bc9af4af-fb39-4a51-83ae-dab3f1d159f2 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:52.568254173 +0000 UTC m=+10.063059948 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs") pod "multus-admission-controller-58c9f8fc64-bnrjt" (UID: "bc9af4af-fb39-4a51-83ae-dab3f1d159f2") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:51.569060 master-0 kubenswrapper[28504]: E0318 13:23:51.569031 28504 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:51.569111 master-0 kubenswrapper[28504]: E0318 13:23:51.569081 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config podName:6ed4f640-d481-4e7a-92eb-f0eda82e138c nodeName:}" failed. No retries permitted until 2026-03-18 13:23:52.569071236 +0000 UTC m=+10.063877001 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-dldw9" (UID: "6ed4f640-d481-4e7a-92eb-f0eda82e138c") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:23:51.583964 master-0 kubenswrapper[28504]: I0318 13:23:51.583876 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 13:23:51.601610 master-0 kubenswrapper[28504]: I0318 13:23:51.601549 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 13:23:51.621183 master-0 kubenswrapper[28504]: I0318 13:23:51.621146 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:23:51.641844 master-0 kubenswrapper[28504]: I0318 13:23:51.641771 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-29gmv" Mar 18 13:23:51.780585 master-0 kubenswrapper[28504]: I0318 13:23:51.780442 28504 request.go:700] Waited for 2.922778051s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token Mar 18 13:23:51.937988 master-0 kubenswrapper[28504]: I0318 13:23:51.937948 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lx2\" (UniqueName: \"kubernetes.io/projected/4086d06f-d50e-4632-9da7-508909429eef-kube-api-access-w4lx2\") pod \"multus-9bhww\" (UID: \"4086d06f-d50e-4632-9da7-508909429eef\") " pod="openshift-multus/multus-9bhww" Mar 18 13:23:52.002307 master-0 kubenswrapper[28504]: I0318 13:23:52.002182 28504 kubelet_pods.go:1320] "Clean up containers 
for orphaned pod we had not seen before" podUID="49fac1b46a11e49501805e891baae4a9" killPodOptions="" Mar 18 13:23:52.002836 master-0 kubenswrapper[28504]: E0318 13:23:52.002801 28504 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.185s" Mar 18 13:23:52.002925 master-0 kubenswrapper[28504]: I0318 13:23:52.002833 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerDied","Data":"0f1b7521916bb1f15f4a8946c701639d4de35a4fc8e0cbdc319661e84db6acb6"} Mar 18 13:23:52.002925 master-0 kubenswrapper[28504]: I0318 13:23:52.002903 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:52.003022 master-0 kubenswrapper[28504]: I0318 13:23:52.002923 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 13:23:52.003022 master-0 kubenswrapper[28504]: I0318 13:23:52.002981 28504 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="bd99bb9c-615b-4ddc-8849-489954612633" Mar 18 13:23:52.003022 master-0 kubenswrapper[28504]: I0318 13:23:52.003000 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a"} Mar 18 13:23:52.003022 master-0 kubenswrapper[28504]: I0318 13:23:52.003018 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 13:23:52.003163 master-0 kubenswrapper[28504]: I0318 13:23:52.003030 28504 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" 
mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="bd99bb9c-615b-4ddc-8849-489954612633" Mar 18 13:23:52.003163 master-0 kubenswrapper[28504]: I0318 13:23:52.003043 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 13:23:52.003163 master-0 kubenswrapper[28504]: I0318 13:23:52.003052 28504 scope.go:117] "RemoveContainer" containerID="5b4c84f643c308b4da498c0b191698ddaa3218b818a04462557cc1d1c093013c" Mar 18 13:23:52.003163 master-0 kubenswrapper[28504]: I0318 13:23:52.003074 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:52.020186 master-0 kubenswrapper[28504]: I0318 13:23:52.019995 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfq7\" (UniqueName: \"kubernetes.io/projected/7e309570-09d0-412a-a74b-c5397d048a30-kube-api-access-mcfq7\") pod \"cluster-samples-operator-85f7577d78-jjdvw\" (UID: \"7e309570-09d0-412a-a74b-c5397d048a30\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-jjdvw" Mar 18 13:23:52.020186 master-0 kubenswrapper[28504]: I0318 13:23:52.020057 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes" Mar 18 13:23:52.020664 master-0 kubenswrapper[28504]: I0318 13:23:52.020631 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:52.020725 master-0 kubenswrapper[28504]: I0318 13:23:52.020679 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 13:23:52.020725 master-0 kubenswrapper[28504]: I0318 13:23:52.020699 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7"} Mar 18 13:23:52.020780 master-0 kubenswrapper[28504]: I0318 13:23:52.020738 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:23:52.020780 master-0 kubenswrapper[28504]: I0318 13:23:52.020762 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-8qhwm" Mar 18 13:23:52.020780 master-0 kubenswrapper[28504]: I0318 13:23:52.020772 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:52.020867 master-0 kubenswrapper[28504]: I0318 13:23:52.020781 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f8b0391a9dd6a8a76a315386f50081873095d6505ee1824ca4cf57436b5940a3"} Mar 18 13:23:52.020867 master-0 kubenswrapper[28504]: I0318 13:23:52.020792 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:23:52.144188 master-0 kubenswrapper[28504]: I0318 13:23:52.139845 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9tzl\" (UniqueName: \"kubernetes.io/projected/c9a9baa5-9334-47dc-8d0c-eafc96a679b3-kube-api-access-z9tzl\") pod \"openshift-controller-manager-operator-8c94f4649-4qs2l\" (UID: \"c9a9baa5-9334-47dc-8d0c-eafc96a679b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-4qs2l" Mar 18 13:23:52.144188 master-0 kubenswrapper[28504]: I0318 13:23:52.141246 28504 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pmmhd\" (UniqueName: \"kubernetes.io/projected/3a039fc2-b0af-4b2c-a884-1c274c08064d-kube-api-access-pmmhd\") pod \"service-ca-79bc6b8d76-855bx\" (UID: \"3a039fc2-b0af-4b2c-a884-1c274c08064d\") " pod="openshift-service-ca/service-ca-79bc6b8d76-855bx" Mar 18 13:23:52.144188 master-0 kubenswrapper[28504]: I0318 13:23:52.142388 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5475b\" (UniqueName: \"kubernetes.io/projected/bd033b5b-af07-4e69-9a5c-46f7c9bde95a-kube-api-access-5475b\") pod \"cluster-autoscaler-operator-866dc4744-q8vxr\" (UID: \"bd033b5b-af07-4e69-9a5c-46f7c9bde95a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-q8vxr" Mar 18 13:23:52.144188 master-0 kubenswrapper[28504]: I0318 13:23:52.142804 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hthf8\" (UniqueName: \"kubernetes.io/projected/f3c106be-27ea-4849-b365-eff6d25f5e71-kube-api-access-hthf8\") pod \"machine-config-daemon-2qjl7\" (UID: \"f3c106be-27ea-4849-b365-eff6d25f5e71\") " pod="openshift-machine-config-operator/machine-config-daemon-2qjl7" Mar 18 13:23:52.144188 master-0 kubenswrapper[28504]: I0318 13:23:52.143160 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rccw\" (UniqueName: \"kubernetes.io/projected/2e0fa133-60e7-47d0-996e-7e85aef2a218-kube-api-access-7rccw\") pod \"redhat-marketplace-p546b\" (UID: \"2e0fa133-60e7-47d0-996e-7e85aef2a218\") " pod="openshift-marketplace/redhat-marketplace-p546b" Mar 18 13:23:52.150953 master-0 kubenswrapper[28504]: I0318 13:23:52.149340 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl28\" (UniqueName: \"kubernetes.io/projected/5e691486-8540-4b79-8eed-b0fb829071db-kube-api-access-lpl28\") pod \"network-metrics-daemon-kq2j4\" (UID: \"5e691486-8540-4b79-8eed-b0fb829071db\") " pod="openshift-multus/network-metrics-daemon-kq2j4" 
Mar 18 13:23:52.150953 master-0 kubenswrapper[28504]: I0318 13:23:52.149421 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxs4\" (UniqueName: \"kubernetes.io/projected/ee1eb80b-5a76-443f-a534-54d5bdc0c98a-kube-api-access-qvxs4\") pod \"cluster-monitoring-operator-58845fbb57-jfdn5\" (UID: \"ee1eb80b-5a76-443f-a534-54d5bdc0c98a\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-jfdn5" Mar 18 13:23:52.150953 master-0 kubenswrapper[28504]: I0318 13:23:52.149542 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdqs\" (UniqueName: \"kubernetes.io/projected/36db10b8-33a2-4b54-85e2-9809eb6bc37d-kube-api-access-bkdqs\") pod \"package-server-manager-7b95f86987-kbpvr\" (UID: \"36db10b8-33a2-4b54-85e2-9809eb6bc37d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr" Mar 18 13:23:52.151363 master-0 kubenswrapper[28504]: I0318 13:23:52.151334 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27hr\" (UniqueName: \"kubernetes.io/projected/2385db6b-4286-4839-822c-aa9c52290172-kube-api-access-d27hr\") pod \"machine-config-operator-84d549f6d5-6qlqd\" (UID: \"2385db6b-4286-4839-822c-aa9c52290172\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-6qlqd" Mar 18 13:23:52.152448 master-0 kubenswrapper[28504]: I0318 13:23:52.152422 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlkj\" (UniqueName: \"kubernetes.io/projected/1bf0ea4e-8b08-488f-b252-39580f46b756-kube-api-access-4mlkj\") pod \"etcd-operator-8544cbcf9c-hmbpl\" (UID: \"1bf0ea4e-8b08-488f-b252-39580f46b756\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-hmbpl" Mar 18 13:23:52.153152 master-0 kubenswrapper[28504]: I0318 13:23:52.153120 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw64j\" (UniqueName: 
\"kubernetes.io/projected/1ad580a2-7f58-4d66-adad-0a53d9777655-kube-api-access-cw64j\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf\" (UID: \"1ad580a2-7f58-4d66-adad-0a53d9777655\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-5qndf" Mar 18 13:23:52.153313 master-0 kubenswrapper[28504]: I0318 13:23:52.153283 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882b8\" (UniqueName: \"kubernetes.io/projected/8a0944d2-d99a-42eb-81f5-a212b750b8f4-kube-api-access-882b8\") pod \"network-operator-7bd846bfc4-mk4d5\" (UID: \"8a0944d2-d99a-42eb-81f5-a212b750b8f4\") " pod="openshift-network-operator/network-operator-7bd846bfc4-mk4d5" Mar 18 13:23:52.154098 master-0 kubenswrapper[28504]: I0318 13:23:52.154075 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w454\" (UniqueName: \"kubernetes.io/projected/933a37fd-d76a-4f60-8dd8-301fb73daf42-kube-api-access-5w454\") pod \"control-plane-machine-set-operator-6f97756bc8-bjpp5\" (UID: \"933a37fd-d76a-4f60-8dd8-301fb73daf42\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-bjpp5" Mar 18 13:23:52.154913 master-0 kubenswrapper[28504]: I0318 13:23:52.154886 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2f2982b-2117-4c16-a4e3-f7e14c7ddc41-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-nqtlk\" (UID: \"e2f2982b-2117-4c16-a4e3-f7e14c7ddc41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-nqtlk" Mar 18 13:23:52.155990 master-0 kubenswrapper[28504]: I0318 13:23:52.155953 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8v5n\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-kube-api-access-h8v5n\") pod 
\"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: \"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:23:52.157511 master-0 kubenswrapper[28504]: I0318 13:23:52.157471 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nllws\" (UniqueName: \"kubernetes.io/projected/317a89ea-e9dd-4167-8568-bb36e2431015-kube-api-access-nllws\") pod \"community-operators-nhwvw\" (UID: \"317a89ea-e9dd-4167-8568-bb36e2431015\") " pod="openshift-marketplace/community-operators-nhwvw" Mar 18 13:23:52.160019 master-0 kubenswrapper[28504]: I0318 13:23:52.159986 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6d7j\" (UniqueName: \"kubernetes.io/projected/35d8f08f-4c57-44e0-8e8f-3969287e2a5a-kube-api-access-q6d7j\") pod \"redhat-operators-459lq\" (UID: \"35d8f08f-4c57-44e0-8e8f-3969287e2a5a\") " pod="openshift-marketplace/redhat-operators-459lq" Mar 18 13:23:52.176108 master-0 kubenswrapper[28504]: I0318 13:23:52.176043 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsns\" (UniqueName: \"kubernetes.io/projected/46ae7b31-c91c-477e-a04a-a3a8541747be-kube-api-access-zwsns\") pod \"multus-additional-cni-plugins-xpppb\" (UID: \"46ae7b31-c91c-477e-a04a-a3a8541747be\") " pod="openshift-multus/multus-additional-cni-plugins-xpppb" Mar 18 13:23:52.178229 master-0 kubenswrapper[28504]: I0318 13:23:52.178195 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljm6\" (UniqueName: \"kubernetes.io/projected/d2cf9274-25d2-4576-bbef-1d416dfff0a9-kube-api-access-vljm6\") pod \"certified-operators-d7pj2\" (UID: \"d2cf9274-25d2-4576-bbef-1d416dfff0a9\") " pod="openshift-marketplace/certified-operators-d7pj2" Mar 18 13:23:52.178853 master-0 kubenswrapper[28504]: I0318 13:23:52.178824 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-crbvx\" (UniqueName: \"kubernetes.io/projected/369e9689-e2f6-4276-b096-8db094f8d6ae-kube-api-access-crbvx\") pod \"cluster-node-tuning-operator-598fbc5f8f-p6tvz\" (UID: \"369e9689-e2f6-4276-b096-8db094f8d6ae\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-p6tvz" Mar 18 13:23:52.179435 master-0 kubenswrapper[28504]: I0318 13:23:52.179402 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6rn\" (UniqueName: \"kubernetes.io/projected/5bccf60c-5b07-4f40-8430-12bfb62661c7-kube-api-access-4b6rn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-4s6b8\" (UID: \"5bccf60c-5b07-4f40-8430-12bfb62661c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-4s6b8" Mar 18 13:23:52.184417 master-0 kubenswrapper[28504]: I0318 13:23:52.184374 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb2q\" (UniqueName: \"kubernetes.io/projected/83a4f641-d28f-42aa-a228-f6086d720fe4-kube-api-access-9hb2q\") pod \"service-ca-operator-b865698dc-7t5g5\" (UID: \"83a4f641-d28f-42aa-a228-f6086d720fe4\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-7t5g5" Mar 18 13:23:52.184607 master-0 kubenswrapper[28504]: I0318 13:23:52.184584 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gv8b\" (UniqueName: \"kubernetes.io/projected/16a930da-d793-486f-bcef-cf042d3c427d-kube-api-access-5gv8b\") pod \"cluster-olm-operator-67dcd4998-cwpkz\" (UID: \"16a930da-d793-486f-bcef-cf042d3c427d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-cwpkz" Mar 18 13:23:52.197684 master-0 kubenswrapper[28504]: I0318 13:23:52.197638 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq596\" (UniqueName: \"kubernetes.io/projected/734f9f10-5bde-44d5-a831-021b93fd667d-kube-api-access-mq596\") pod 
\"machine-approver-5c6485487f-f8zc2\" (UID: \"734f9f10-5bde-44d5-a831-021b93fd667d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-f8zc2" Mar 18 13:23:52.211861 master-0 kubenswrapper[28504]: I0318 13:23:52.211796 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxgz\" (UniqueName: \"kubernetes.io/projected/ebe459df-4be3-4a73-a061-5d2c637f57be-kube-api-access-fqxgz\") pod \"network-check-source-b4bf74f6-qnwtb\" (UID: \"ebe459df-4be3-4a73-a061-5d2c637f57be\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-qnwtb" Mar 18 13:23:52.242514 master-0 kubenswrapper[28504]: I0318 13:23:52.242444 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqk9\" (UniqueName: \"kubernetes.io/projected/da6a763d-2777-40c4-ae1f-c77ced406ea2-kube-api-access-lhqk9\") pod \"dns-operator-9c5679d8f-bqbzx\" (UID: \"da6a763d-2777-40c4-ae1f-c77ced406ea2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-bqbzx" Mar 18 13:23:52.261382 master-0 kubenswrapper[28504]: I0318 13:23:52.261326 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c4572e-0b38-4db1-96e5-6a35e29048e7-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-5zbrg\" (UID: \"c2c4572e-0b38-4db1-96e5-6a35e29048e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-5zbrg" Mar 18 13:23:52.305534 master-0 kubenswrapper[28504]: I0318 13:23:52.303314 28504 scope.go:117] "RemoveContainer" containerID="a907a02503b5df781613b6da0961b359781cced0221882a7b1a1568fee1b84fe" Mar 18 13:23:52.306443 master-0 kubenswrapper[28504]: I0318 13:23:52.306392 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 13:23:52.308178 master-0 kubenswrapper[28504]: I0318 
13:23:52.308142 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="f8b0391a9dd6a8a76a315386f50081873095d6505ee1824ca4cf57436b5940a3" exitCode=255 Mar 18 13:23:52.308255 master-0 kubenswrapper[28504]: I0318 13:23:52.308217 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"f8b0391a9dd6a8a76a315386f50081873095d6505ee1824ca4cf57436b5940a3"} Mar 18 13:23:52.313026 master-0 kubenswrapper[28504]: I0318 13:23:52.310578 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/5.log" Mar 18 13:23:52.318227 master-0 kubenswrapper[28504]: I0318 13:23:52.318026 28504 scope.go:117] "RemoveContainer" containerID="7ddc54cddedd2bdae32224357d62187da26cebbd3a01e7a295c7e87fef85c020" Mar 18 13:23:52.336167 master-0 kubenswrapper[28504]: I0318 13:23:52.336113 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b29z\" (UniqueName: \"kubernetes.io/projected/9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a-kube-api-access-7b29z\") pod \"apiserver-7d95bbc4f4-4ch22\" (UID: \"9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a\") " pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:52.336687 master-0 kubenswrapper[28504]: I0318 13:23:52.336653 28504 scope.go:117] "RemoveContainer" containerID="dceb07db18c0d8faeb0249820c09e2ecee50c97d0f9fd01d9a209e9a350fd96e" Mar 18 13:23:52.337896 master-0 kubenswrapper[28504]: I0318 13:23:52.337858 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93ea3c78-dede-468f-89a5-551133f794c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-4bfbf\" (UID: \"93ea3c78-dede-468f-89a5-551133f794c5\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-4bfbf" Mar 18 13:23:52.338196 master-0 kubenswrapper[28504]: I0318 13:23:52.338159 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcm8d\" (UniqueName: \"kubernetes.io/projected/0213214b-693b-411b-8254-48d7826011eb-kube-api-access-xcm8d\") pod \"openshift-config-operator-95bf4f4d-c7nh9\" (UID: \"0213214b-693b-411b-8254-48d7826011eb\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:52.338768 master-0 kubenswrapper[28504]: I0318 13:23:52.338567 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvlj\" (UniqueName: \"kubernetes.io/projected/4bc77989-ecfc-4500-92a0-18c2b3b78408-kube-api-access-brvlj\") pod \"ovnkube-control-plane-57f769d897-9mk42\" (UID: \"4bc77989-ecfc-4500-92a0-18c2b3b78408\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-9mk42" Mar 18 13:23:52.343781 master-0 kubenswrapper[28504]: I0318 13:23:52.343524 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:52.347051 master-0 kubenswrapper[28504]: I0318 13:23:52.347023 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-c7nh9" Mar 18 13:23:52.364589 master-0 kubenswrapper[28504]: I0318 13:23:52.364521 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-bound-sa-token\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:23:52.382161 master-0 kubenswrapper[28504]: I0318 13:23:52.382052 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r8dfw\" (UniqueName: \"kubernetes.io/projected/8ce8e99d-7b02-4bf4-a438-adde851918cb-kube-api-access-r8dfw\") pod \"authentication-operator-5885bfd7f4-mqh5c\" (UID: \"8ce8e99d-7b02-4bf4-a438-adde851918cb\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-mqh5c" Mar 18 13:23:52.465664 master-0 kubenswrapper[28504]: I0318 13:23:52.465591 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mgr\" (UniqueName: \"kubernetes.io/projected/f2b92a53-0b61-4e1d-a306-f9a498e48b38-kube-api-access-j5mgr\") pod \"ingress-operator-66b84d69b-xwqsb\" (UID: \"f2b92a53-0b61-4e1d-a306-f9a498e48b38\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" Mar 18 13:23:52.466917 master-0 kubenswrapper[28504]: I0318 13:23:52.466759 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbztv\" (UniqueName: \"kubernetes.io/projected/c074751c-6b3c-44df-aca5-42fa69662779-kube-api-access-bbztv\") pod \"insights-operator-68bf6ff9d6-ckwz8\" (UID: \"c074751c-6b3c-44df-aca5-42fa69662779\") " pod="openshift-insights/insights-operator-68bf6ff9d6-ckwz8" Mar 18 13:23:52.471164 master-0 kubenswrapper[28504]: I0318 13:23:52.471112 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghzrb\" (UniqueName: \"kubernetes.io/projected/47f82c03-65d1-4a6c-ba09-8a00ae778009-kube-api-access-ghzrb\") pod \"catalog-operator-68f85b4d6c-p9k56\" (UID: \"47f82c03-65d1-4a6c-ba09-8a00ae778009\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:23:52.472103 master-0 kubenswrapper[28504]: I0318 13:23:52.472050 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzldt\" (UniqueName: \"kubernetes.io/projected/1ad93612-ab12-4b30-984f-119e1b924a84-kube-api-access-xzldt\") pod \"csi-snapshot-controller-64854d9cff-wkw7f\" (UID: \"1ad93612-ab12-4b30-984f-119e1b924a84\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-wkw7f" Mar 18 13:23:52.474088 master-0 kubenswrapper[28504]: I0318 13:23:52.474047 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlm4c\" (UniqueName: \"kubernetes.io/projected/baeb6380-95e4-4e10-9798-e1e22f20bade-kube-api-access-xlm4c\") pod \"operator-controller-controller-manager-57777556ff-4r95z\" (UID: \"baeb6380-95e4-4e10-9798-e1e22f20bade\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z" Mar 18 13:23:52.501160 master-0 kubenswrapper[28504]: I0318 13:23:52.501053 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbm6\" (UniqueName: \"kubernetes.io/projected/b41c9132-92ef-429d-bdd5-9bdb024e04fc-kube-api-access-wlbm6\") pod \"apiserver-574f6d5bf6-8krhk\" (UID: \"b41c9132-92ef-429d-bdd5-9bdb024e04fc\") " pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:52.521297 master-0 kubenswrapper[28504]: I0318 13:23:52.521232 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bfw\" (UniqueName: \"kubernetes.io/projected/ab9ef7c0-f9f2-4048-9857-06ab48f36ecf-kube-api-access-w6bfw\") pod \"router-default-7dcf5569b5-mtnzv\" (UID: \"ab9ef7c0-f9f2-4048-9857-06ab48f36ecf\") " pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:52.573958 master-0 kubenswrapper[28504]: I0318 13:23:52.573816 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqzx\" (UniqueName: \"kubernetes.io/projected/330df925-8429-4b96-9bfe-caa017c21afa-kube-api-access-2sqzx\") pod \"marketplace-operator-89ccd998f-4v84b\" (UID: \"330df925-8429-4b96-9bfe-caa017c21afa\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b" Mar 18 13:23:52.575908 master-0 kubenswrapper[28504]: I0318 13:23:52.575872 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jbv4l\" (UniqueName: \"kubernetes.io/projected/02879f34-7062-4f07-9a5a-f965103d9182-kube-api-access-jbv4l\") pod \"machine-config-server-4f5s4\" (UID: \"02879f34-7062-4f07-9a5a-f965103d9182\") " pod="openshift-machine-config-operator/machine-config-server-4f5s4" Mar 18 13:23:52.576035 master-0 kubenswrapper[28504]: I0318 13:23:52.576001 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8xk\" (UniqueName: \"kubernetes.io/projected/cb471665-2b07-48df-9881-3fb663390b23-kube-api-access-6f8xk\") pod \"openshift-apiserver-operator-d65958b8-lwfvl\" (UID: \"cb471665-2b07-48df-9881-3fb663390b23\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-lwfvl" Mar 18 13:23:52.611541 master-0 kubenswrapper[28504]: I0318 13:23:52.611462 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:52.611774 master-0 kubenswrapper[28504]: I0318 13:23:52.611559 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:23:52.611882 master-0 kubenswrapper[28504]: I0318 13:23:52.611840 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") 
pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:52.611917 master-0 kubenswrapper[28504]: I0318 13:23:52.611899 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:52.612017 master-0 kubenswrapper[28504]: I0318 13:23:52.611974 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:23:52.612191 master-0 kubenswrapper[28504]: I0318 13:23:52.612156 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:52.635334 master-0 kubenswrapper[28504]: I0318 13:23:52.635275 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:52.635562 master-0 kubenswrapper[28504]: I0318 13:23:52.635363 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22" Mar 18 13:23:52.640012 master-0 
kubenswrapper[28504]: I0318 13:23:52.639952 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" Mar 18 13:23:52.653270 master-0 kubenswrapper[28504]: I0318 13:23:52.653176 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:52.653476 master-0 kubenswrapper[28504]: I0318 13:23:52.653330 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk" Mar 18 13:23:52.666549 master-0 kubenswrapper[28504]: I0318 13:23:52.666483 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jz6\" (UniqueName: \"kubernetes.io/projected/4671673d-afa0-481f-b3a2-2c2b9441b6ce-kube-api-access-d7jz6\") pod \"dns-default-wl929\" (UID: \"4671673d-afa0-481f-b3a2-2c2b9441b6ce\") " pod="openshift-dns/dns-default-wl929" Mar 18 13:23:52.666833 master-0 kubenswrapper[28504]: I0318 13:23:52.666776 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9jhr\" (UniqueName: \"kubernetes.io/projected/7fa6920b-f7d9-4758-bba9-356a2c8b1b67-kube-api-access-w9jhr\") pod \"cloud-credential-operator-744f9dbf77-9nw6w\" (UID: \"7fa6920b-f7d9-4758-bba9-356a2c8b1b67\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9nw6w" Mar 18 13:23:52.667004 master-0 kubenswrapper[28504]: I0318 13:23:52.666972 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4dcj\" (UniqueName: \"kubernetes.io/projected/375d5112-d2be-47cf-bee1-82614ba71ed8-kube-api-access-d4dcj\") pod \"packageserver-5dccbdd8cc-pw7vm\" (UID: \"375d5112-d2be-47cf-bee1-82614ba71ed8\") " pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:52.667899 master-0 kubenswrapper[28504]: I0318 13:23:52.667868 28504 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dzblt\" (UniqueName: \"kubernetes.io/projected/35925474-e3fe-4cff-aad6-d853816618c7-kube-api-access-dzblt\") pod \"olm-operator-5c9796789-8r4hr\" (UID: \"35925474-e3fe-4cff-aad6-d853816618c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:23:52.715537 master-0 kubenswrapper[28504]: I0318 13:23:52.715489 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") pod \"controller-manager-66b7876dbc-rdzrh\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:52.715721 master-0 kubenswrapper[28504]: I0318 13:23:52.715621 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbksj\" (UniqueName: \"kubernetes.io/projected/9ca94153-9d1a-4b0a-a3eb-556e85f2e875-kube-api-access-hbksj\") pod \"migrator-8487694857-vf6mv\" (UID: \"9ca94153-9d1a-4b0a-a3eb-556e85f2e875\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-vf6mv" Mar 18 13:23:52.721495 master-0 kubenswrapper[28504]: I0318 13:23:52.721458 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4d0b174-33e4-46ee-863b-b5cc2a271b85-kube-api-access\") pod \"cluster-version-operator-7d58488df-2bmkn\" (UID: \"e4d0b174-33e4-46ee-863b-b5cc2a271b85\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-2bmkn" Mar 18 13:23:52.736779 master-0 kubenswrapper[28504]: I0318 13:23:52.736710 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73c93ee3-cf14-4fea-b2a7-ccfb56e55be4-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-n995f\" (UID: 
\"73c93ee3-cf14-4fea-b2a7-ccfb56e55be4\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-n995f" Mar 18 13:23:52.795963 master-0 kubenswrapper[28504]: I0318 13:23:52.795898 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4djxt\" (UniqueName: \"kubernetes.io/projected/d3f208f9-e2e1-4fae-a47a-f58b722e0ad5-kube-api-access-4djxt\") pod \"cluster-cloud-controller-manager-operator-7dff898856-ncjbh\" (UID: \"d3f208f9-e2e1-4fae-a47a-f58b722e0ad5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-ncjbh" Mar 18 13:23:52.796473 master-0 kubenswrapper[28504]: I0318 13:23:52.796435 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5xj\" (UniqueName: \"kubernetes.io/projected/234a5a6c-3790-49d0-b1e7-86f81048d96a-kube-api-access-pp5xj\") pod \"catalogd-controller-manager-6864dc98f7-8jrfz\" (UID: \"234a5a6c-3790-49d0-b1e7-86f81048d96a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:52.799927 master-0 kubenswrapper[28504]: I0318 13:23:52.799868 28504 request.go:700] Waited for 3.933920591s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token Mar 18 13:23:52.801677 master-0 kubenswrapper[28504]: I0318 13:23:52.801628 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6sr4\" (UniqueName: \"kubernetes.io/projected/17adbc1a-f29c-4278-b29a-0cc3879b753f-kube-api-access-v6sr4\") pod \"machine-config-controller-b4f87c5b9-qpp2s\" (UID: \"17adbc1a-f29c-4278-b29a-0cc3879b753f\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-qpp2s" Mar 18 13:23:52.814604 master-0 kubenswrapper[28504]: I0318 13:23:52.814551 28504 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k254v\" (UniqueName: \"kubernetes.io/projected/eb8907fd-35dd-452a-8032-f2f95a6e553a-kube-api-access-k254v\") pod \"network-node-identity-xcbtb\" (UID: \"eb8907fd-35dd-452a-8032-f2f95a6e553a\") " pod="openshift-network-node-identity/network-node-identity-xcbtb" Mar 18 13:23:52.833722 master-0 kubenswrapper[28504]: I0318 13:23:52.833619 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kpz5\" (UniqueName: \"kubernetes.io/projected/8f59a12b-d690-44c5-972c-fb4b0b5819f1-kube-api-access-8kpz5\") pod \"node-resolver-slqms\" (UID: \"8f59a12b-d690-44c5-972c-fb4b0b5819f1\") " pod="openshift-dns/node-resolver-slqms" Mar 18 13:23:52.854576 master-0 kubenswrapper[28504]: I0318 13:23:52.854520 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9m7\" (UniqueName: \"kubernetes.io/projected/2cad2401-dab1-49f7-870e-a742ebfe323f-kube-api-access-rv9m7\") pod \"network-check-target-zlgkc\" (UID: \"2cad2401-dab1-49f7-870e-a742ebfe323f\") " pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:23:52.872408 master-0 kubenswrapper[28504]: I0318 13:23:52.872360 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvd5\" (UniqueName: \"kubernetes.io/projected/d2e2ef3a-a6e9-44dc-93c7-9f533e75502a-kube-api-access-5dvd5\") pod \"machine-api-operator-6fbb6cf6f9-nf22v\" (UID: \"d2e2ef3a-a6e9-44dc-93c7-9f533e75502a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-nf22v" Mar 18 13:23:52.895345 master-0 kubenswrapper[28504]: I0318 13:23:52.895284 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc69w\" (UniqueName: \"kubernetes.io/projected/a01c92f5-7938-437d-8262-11598bd8023c-kube-api-access-qc69w\") pod \"cluster-baremetal-operator-6f69995874-7w5g8\" (UID: \"a01c92f5-7938-437d-8262-11598bd8023c\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-7w5g8" Mar 18 13:23:52.895844 master-0 kubenswrapper[28504]: I0318 13:23:52.895807 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:23:52.900665 master-0 kubenswrapper[28504]: I0318 13:23:52.900622 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-p9k56" Mar 18 13:23:52.915098 master-0 kubenswrapper[28504]: I0318 13:23:52.915051 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxn4v\" (UniqueName: \"kubernetes.io/projected/7a951627-c032-4846-821c-c4bcbf4a91b9-kube-api-access-wxn4v\") pod \"cluster-storage-operator-7d87854d6-92zqc\" (UID: \"7a951627-c032-4846-821c-c4bcbf4a91b9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-92zqc" Mar 18 13:23:52.929008 master-0 kubenswrapper[28504]: I0318 13:23:52.928912 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:23:52.929289 master-0 kubenswrapper[28504]: I0318 13:23:52.929241 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wl929" Mar 18 13:23:52.931087 master-0 kubenswrapper[28504]: I0318 13:23:52.931048 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wl929" Mar 18 13:23:52.931192 master-0 kubenswrapper[28504]: I0318 13:23:52.931161 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-zlgkc" Mar 18 13:23:52.933569 master-0 kubenswrapper[28504]: I0318 13:23:52.933523 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6b9b\" (UniqueName: 
\"kubernetes.io/projected/0f16e797-a619-46a8-948a-9fdfc8a9891f-kube-api-access-q6b9b\") pod \"tuned-rlp78\" (UID: \"0f16e797-a619-46a8-948a-9fdfc8a9891f\") " pod="openshift-cluster-node-tuning-operator/tuned-rlp78" Mar 18 13:23:52.945973 master-0 kubenswrapper[28504]: I0318 13:23:52.945898 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:23:52.946176 master-0 kubenswrapper[28504]: I0318 13:23:52.946091 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:52.948134 master-0 kubenswrapper[28504]: I0318 13:23:52.948089 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8jrfz" Mar 18 13:23:52.950711 master-0 kubenswrapper[28504]: I0318 13:23:52.950683 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-8r4hr" Mar 18 13:23:52.953433 master-0 kubenswrapper[28504]: I0318 13:23:52.953377 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:52.955428 master-0 kubenswrapper[28504]: I0318 13:23:52.955380 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:52.956890 master-0 kubenswrapper[28504]: I0318 13:23:52.956834 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") pod \"route-controller-manager-597f7b4fd-fgxxq\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 
13:23:52.957125 master-0 kubenswrapper[28504]: I0318 13:23:52.957068 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5dccbdd8cc-pw7vm" Mar 18 13:23:52.960981 master-0 kubenswrapper[28504]: I0318 13:23:52.960479 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:23:52.972995 master-0 kubenswrapper[28504]: I0318 13:23:52.972867 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwfnk\" (UniqueName: \"kubernetes.io/projected/3c24b6e2-965b-4b4f-ad65-ded7b3cc3971-kube-api-access-qwfnk\") pod \"iptables-alerter-tvnss\" (UID: \"3c24b6e2-965b-4b4f-ad65-ded7b3cc3971\") " pod="openshift-network-operator/iptables-alerter-tvnss" Mar 18 13:23:52.994184 master-0 kubenswrapper[28504]: I0318 13:23:52.994103 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvfd\" (UniqueName: \"kubernetes.io/projected/20dc979a-732b-43b5-acc2-118e4c350470-kube-api-access-wnvfd\") pod \"ovnkube-node-pfs29\" (UID: \"20dc979a-732b-43b5-acc2-118e4c350470\") " pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:53.014070 master-0 kubenswrapper[28504]: I0318 13:23:53.014006 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twczm\" (UniqueName: \"kubernetes.io/projected/bc9af4af-fb39-4a51-83ae-dab3f1d159f2-kube-api-access-twczm\") pod \"multus-admission-controller-58c9f8fc64-bnrjt\" (UID: \"bc9af4af-fb39-4a51-83ae-dab3f1d159f2\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-bnrjt" Mar 18 13:23:53.033967 master-0 kubenswrapper[28504]: I0318 13:23:53.033885 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mks\" (UniqueName: \"kubernetes.io/projected/5a715e53-1874-4993-93d1-504c3470a6f5-kube-api-access-99mks\") pod 
\"prometheus-operator-6c8df6d4b-6twz2\" (UID: \"5a715e53-1874-4993-93d1-504c3470a6f5\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-6twz2" Mar 18 13:23:53.053808 master-0 kubenswrapper[28504]: I0318 13:23:53.053750 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhmmv\" (UniqueName: \"kubernetes.io/projected/6ed4f640-d481-4e7a-92eb-f0eda82e138c-kube-api-access-xhmmv\") pod \"kube-state-metrics-7bbc969446-dldw9\" (UID: \"6ed4f640-d481-4e7a-92eb-f0eda82e138c\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dldw9" Mar 18 13:23:53.072511 master-0 kubenswrapper[28504]: I0318 13:23:53.072445 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2msq\" (UniqueName: \"kubernetes.io/projected/b856d226-a137-4954-82c5-5929d579387a-kube-api-access-n2msq\") pod \"node-exporter-f55c6\" (UID: \"b856d226-a137-4954-82c5-5929d579387a\") " pod="openshift-monitoring/node-exporter-f55c6" Mar 18 13:23:53.094821 master-0 kubenswrapper[28504]: I0318 13:23:53.094684 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s9rk\" (UniqueName: \"kubernetes.io/projected/3c0d0048-6d96-459c-8742-2f092af44a6a-kube-api-access-2s9rk\") pod \"openshift-state-metrics-5dc6c74576-bshl9\" (UID: \"3c0d0048-6d96-459c-8742-2f092af44a6a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-bshl9" Mar 18 13:23:53.117154 master-0 kubenswrapper[28504]: I0318 13:23:53.117053 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") pod \"metrics-server-648866dd9c-ztkrd\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:23:53.136581 master-0 kubenswrapper[28504]: E0318 13:23:53.136518 28504 projected.go:288] Couldn't get configMap 
openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:23:53.136581 master-0 kubenswrapper[28504]: E0318 13:23:53.136569 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:23:53.136858 master-0 kubenswrapper[28504]: E0318 13:23:53.136646 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:53.636624236 +0000 UTC m=+11.131430011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:23:53.153453 master-0 kubenswrapper[28504]: I0318 13:23:53.153391 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:23:53.190378 master-0 kubenswrapper[28504]: I0318 13:23:53.190289 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:23:53.233640 master-0 kubenswrapper[28504]: I0318 13:23:53.233527 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:53.239681 master-0 kubenswrapper[28504]: I0318 13:23:53.239580 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:23:53.246688 master-0 kubenswrapper[28504]: I0318 
13:23:53.246649 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:53.246779 master-0 kubenswrapper[28504]: I0318 13:23:53.246734 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:53.272612 master-0 kubenswrapper[28504]: I0318 13:23:53.272437 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:53.272612 master-0 kubenswrapper[28504]: I0318 13:23:53.272581 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29" Mar 18 13:23:53.322989 master-0 kubenswrapper[28504]: I0318 13:23:53.322000 28504 scope.go:117] "RemoveContainer" containerID="0f1b7521916bb1f15f4a8946c701639d4de35a4fc8e0cbdc319661e84db6acb6" Mar 18 13:23:53.370710 master-0 kubenswrapper[28504]: I0318 13:23:53.370654 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:53.371228 master-0 kubenswrapper[28504]: I0318 13:23:53.371205 28504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:23:53.380230 master-0 kubenswrapper[28504]: I0318 13:23:53.380093 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:23:53.421138 master-0 kubenswrapper[28504]: E0318 13:23:53.421093 28504 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:53.421457 master-0 kubenswrapper[28504]: I0318 13:23:53.421438 28504 scope.go:117] "RemoveContainer" containerID="f8b0391a9dd6a8a76a315386f50081873095d6505ee1824ca4cf57436b5940a3" Mar 18 13:23:53.585798 master-0 kubenswrapper[28504]: 
I0318 13:23:53.585748 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-459lq"
Mar 18 13:23:53.623926 master-0 kubenswrapper[28504]: I0318 13:23:53.623879 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-459lq"
Mar 18 13:23:53.643611 master-0 kubenswrapper[28504]: I0318 13:23:53.643565 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:53.643611 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:53.643611 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:53.643611 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:53.643830 master-0 kubenswrapper[28504]: I0318 13:23:53.643624 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:53.662037 master-0 kubenswrapper[28504]: I0318 13:23:53.661561 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:53.662037 master-0 kubenswrapper[28504]: E0318 13:23:53.661723 28504 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:53.662037 master-0 kubenswrapper[28504]: E0318 13:23:53.661749 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:53.662037 master-0 kubenswrapper[28504]: E0318 13:23:53.661804 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:54.6617866 +0000 UTC m=+12.156592375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:54.100460 master-0 kubenswrapper[28504]: I0318 13:23:54.100388 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:23:54.327663 master-0 kubenswrapper[28504]: I0318 13:23:54.327601 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/5.log"
Mar 18 13:23:54.328025 master-0 kubenswrapper[28504]: I0318 13:23:54.327971 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-xwqsb" event={"ID":"f2b92a53-0b61-4e1d-a306-f9a498e48b38","Type":"ContainerStarted","Data":"00095a184255e2678a9787e66d106ececd7ab2ff7685c1efe3135e275693a239"}
Mar 18 13:23:54.330454 master-0 kubenswrapper[28504]: I0318 13:23:54.330423 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log"
Mar 18 13:23:54.332208 master-0 kubenswrapper[28504]: I0318 13:23:54.332178 28504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:23:54.332666 master-0 kubenswrapper[28504]: I0318 13:23:54.332635 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292"}
Mar 18 13:23:54.336425 master-0 kubenswrapper[28504]: I0318 13:23:54.336380 28504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:23:54.395172 master-0 kubenswrapper[28504]: I0318 13:23:54.394164 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7pj2"
Mar 18 13:23:54.642465 master-0 kubenswrapper[28504]: I0318 13:23:54.642409 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:54.642465 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:54.642465 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:54.642465 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:54.642795 master-0 kubenswrapper[28504]: I0318 13:23:54.642471 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:54.714654 master-0 kubenswrapper[28504]: I0318 13:23:54.714614 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:54.715062 master-0 kubenswrapper[28504]: E0318 13:23:54.715047 28504 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:54.715151 master-0 kubenswrapper[28504]: E0318 13:23:54.715140 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:54.715252 master-0 kubenswrapper[28504]: E0318 13:23:54.715243 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:23:56.715226789 +0000 UTC m=+14.210032564 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:54.856050 master-0 kubenswrapper[28504]: I0318 13:23:54.855957 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=1.855919796 podStartE2EDuration="1.855919796s" podCreationTimestamp="2026-03-18 13:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:23:54.787117836 +0000 UTC m=+12.281923621" watchObservedRunningTime="2026-03-18 13:23:54.855919796 +0000 UTC m=+12.350725571"
Mar 18 13:23:55.157457 master-0 kubenswrapper[28504]: I0318 13:23:55.157317 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=2.157267437 podStartE2EDuration="2.157267437s" podCreationTimestamp="2026-03-18 13:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:23:55.155533418 +0000 UTC m=+12.650339223" watchObservedRunningTime="2026-03-18 13:23:55.157267437 +0000 UTC m=+12.652073212"
Mar 18 13:23:55.204443 master-0 kubenswrapper[28504]: I0318 13:23:55.204387 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p546b"
Mar 18 13:23:55.336219 master-0 kubenswrapper[28504]: I0318 13:23:55.336173 28504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:23:55.338250 master-0 kubenswrapper[28504]: I0318 13:23:55.338205 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:23:55.343016 master-0 kubenswrapper[28504]: I0318 13:23:55.342650 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-kbpvr"
Mar 18 13:23:55.594449 master-0 kubenswrapper[28504]: I0318 13:23:55.594398 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:23:55.603799 master-0 kubenswrapper[28504]: I0318 13:23:55.603751 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-4v84b"
Mar 18 13:23:55.645274 master-0 kubenswrapper[28504]: I0318 13:23:55.645184 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:55.645274 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:55.645274 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:55.645274 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:55.645549 master-0 kubenswrapper[28504]: I0318 13:23:55.645326 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:55.762349 master-0 kubenswrapper[28504]: I0318 13:23:55.762290 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nhwvw"
Mar 18 13:23:55.950315 master-0 kubenswrapper[28504]: I0318 13:23:55.950214 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:55.954820 master-0 kubenswrapper[28504]: I0318 13:23:55.954778 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:56.342104 master-0 kubenswrapper[28504]: I0318 13:23:56.341955 28504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:23:56.374073 master-0 kubenswrapper[28504]: I0318 13:23:56.373970 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 13:23:56.391119 master-0 kubenswrapper[28504]: I0318 13:23:56.391069 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 13:23:56.617163 master-0 kubenswrapper[28504]: I0318 13:23:56.616961 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:56.618702 master-0 kubenswrapper[28504]: I0318 13:23:56.618666 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nhwvw"
Mar 18 13:23:56.623056 master-0 kubenswrapper[28504]: I0318 13:23:56.622371 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:56.643300 master-0 kubenswrapper[28504]: I0318 13:23:56.643231 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:56.643300 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:56.643300 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:56.643300 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:56.643639 master-0 kubenswrapper[28504]: I0318 13:23:56.643307 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:56.745617 master-0 kubenswrapper[28504]: I0318 13:23:56.745532 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:56.746142 master-0 kubenswrapper[28504]: E0318 13:23:56.746069 28504 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:56.746142 master-0 kubenswrapper[28504]: E0318 13:23:56.746131 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:56.746260 master-0 kubenswrapper[28504]: E0318 13:23:56.746209 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:00.746188453 +0000 UTC m=+18.240994228 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:23:56.763962 master-0 kubenswrapper[28504]: I0318 13:23:56.763882 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:56.768420 master-0 kubenswrapper[28504]: I0318 13:23:56.768384 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:56.880058 master-0 kubenswrapper[28504]: I0318 13:23:56.879857 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:23:56.932463 master-0 kubenswrapper[28504]: I0318 13:23:56.932406 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: I0318 13:23:56.935824 28504 patch_prober.go:28] interesting pod/metrics-server-648866dd9c-ztkrd container/metrics-server namespace/openshift-monitoring: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [+]log ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [+]poststarthook/max-in-flight-filter ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [-]metric-storage-ready failed: reason withheld
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [+]metric-informer-sync ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [+]metadata-informer-sync ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: [+]shutdown ok
Mar 18 13:23:56.935868 master-0 kubenswrapper[28504]: readyz check failed
Mar 18 13:23:56.936474 master-0 kubenswrapper[28504]: I0318 13:23:56.935872 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" podUID="b79758b7-9129-496c-abec-80d455648454" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:57.023227 master-0 kubenswrapper[28504]: I0318 13:23:57.023181 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:23:57.353615 master-0 kubenswrapper[28504]: I0318 13:23:57.353556 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:57.354130 master-0 kubenswrapper[28504]: I0318 13:23:57.353660 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:23:57.641805 master-0 kubenswrapper[28504]: I0318 13:23:57.641289 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:23:57.642208 master-0 kubenswrapper[28504]: I0318 13:23:57.642176 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:57.642208 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:57.642208 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:57.642208 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:57.642340 master-0 kubenswrapper[28504]: I0318 13:23:57.642222 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:57.648327 master-0 kubenswrapper[28504]: I0318 13:23:57.648259 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7d95bbc4f4-4ch22"
Mar 18 13:23:57.670268 master-0 kubenswrapper[28504]: I0318 13:23:57.670223 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:57.674744 master-0 kubenswrapper[28504]: I0318 13:23:57.674695 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-574f6d5bf6-8krhk"
Mar 18 13:23:58.365596 master-0 kubenswrapper[28504]: I0318 13:23:58.365526 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p546b"
Mar 18 13:23:58.642635 master-0 kubenswrapper[28504]: I0318 13:23:58.642496 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:58.642635 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:58.642635 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:58.642635 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:58.642635 master-0 kubenswrapper[28504]: I0318 13:23:58.642559 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:59.201616 master-0 kubenswrapper[28504]: I0318 13:23:59.201566 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-459lq"
Mar 18 13:23:59.243536 master-0 kubenswrapper[28504]: I0318 13:23:59.243477 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-459lq"
Mar 18 13:23:59.647030 master-0 kubenswrapper[28504]: I0318 13:23:59.644006 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:59.647030 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:23:59.647030 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:23:59.647030 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:23:59.647030 master-0 kubenswrapper[28504]: I0318 13:23:59.644118 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:00.004136 master-0 kubenswrapper[28504]: I0318 13:24:00.004101 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7pj2"
Mar 18 13:24:00.643009 master-0 kubenswrapper[28504]: I0318 13:24:00.642913 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:00.643009 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:00.643009 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:00.643009 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:00.643440 master-0 kubenswrapper[28504]: I0318 13:24:00.643407 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:00.804758 master-0 kubenswrapper[28504]: I0318 13:24:00.804683 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:24:00.805223 master-0 kubenswrapper[28504]: E0318 13:24:00.804850 28504 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:24:00.805223 master-0 kubenswrapper[28504]: E0318 13:24:00.804879 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:24:00.805223 master-0 kubenswrapper[28504]: E0318 13:24:00.804958 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:08.804920099 +0000 UTC m=+26.299725874 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:24:01.642759 master-0 kubenswrapper[28504]: I0318 13:24:01.642690 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:01.642759 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:01.642759 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:01.642759 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:01.643202 master-0 kubenswrapper[28504]: I0318 13:24:01.642762 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:02.168558 master-0 kubenswrapper[28504]: I0318 13:24:02.168494 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:24:02.170928 master-0 kubenswrapper[28504]: I0318 13:24:02.170893 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-4r95z"
Mar 18 13:24:02.642731 master-0 kubenswrapper[28504]: I0318 13:24:02.642673 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:02.642731 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:02.642731 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:02.642731 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:02.643094 master-0 kubenswrapper[28504]: I0318 13:24:02.642745 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:03.643853 master-0 kubenswrapper[28504]: I0318 13:24:03.643802 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:03.643853 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:03.643853 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:03.643853 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:03.644820 master-0 kubenswrapper[28504]: I0318 13:24:03.643855 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:04.475122 master-0 kubenswrapper[28504]: I0318 13:24:04.475058 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7pj2"
Mar 18 13:24:04.526163 master-0 kubenswrapper[28504]: I0318 13:24:04.526089 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7pj2"
Mar 18 13:24:04.644431 master-0 kubenswrapper[28504]: I0318 13:24:04.644351 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:04.644431 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:04.644431 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:04.644431 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:04.645295 master-0 kubenswrapper[28504]: I0318 13:24:04.644430 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:05.250996 master-0 kubenswrapper[28504]: I0318 13:24:05.250766 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p546b"
Mar 18 13:24:05.294580 master-0 kubenswrapper[28504]: I0318 13:24:05.294541 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p546b"
Mar 18 13:24:05.643140 master-0 kubenswrapper[28504]: I0318 13:24:05.643003 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:05.643140 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:05.643140 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:05.643140 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:05.643140 master-0 kubenswrapper[28504]: I0318 13:24:05.643074 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:05.814958 master-0 kubenswrapper[28504]: I0318 13:24:05.814892 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nhwvw"
Mar 18 13:24:05.981000 master-0 kubenswrapper[28504]: I0318 13:24:05.980920 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nhwvw"
Mar 18 13:24:06.643249 master-0 kubenswrapper[28504]: I0318 13:24:06.643158 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:06.643249 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:06.643249 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:06.643249 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:06.643672 master-0 kubenswrapper[28504]: I0318 13:24:06.643252 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:06.883855 master-0 kubenswrapper[28504]: I0318 13:24:06.883792 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:24:07.642554 master-0 kubenswrapper[28504]: I0318 13:24:07.642499 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:07.642554 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:07.642554 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:07.642554 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:07.642902 master-0 kubenswrapper[28504]: I0318 13:24:07.642571 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:08.642167 master-0 kubenswrapper[28504]: I0318 13:24:08.642104 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:08.642167 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:08.642167 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:08.642167 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:08.642714 master-0 kubenswrapper[28504]: I0318 13:24:08.642184 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:08.842163 master-0 kubenswrapper[28504]: I0318 13:24:08.842099 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:24:08.842435 master-0 kubenswrapper[28504]: E0318 13:24:08.842390 28504 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:24:08.842479 master-0 kubenswrapper[28504]: E0318 13:24:08.842440 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:24:08.842540 master-0 kubenswrapper[28504]: E0318 13:24:08.842507 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:24.842484889 +0000 UTC m=+42.337290664 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 13:24:09.642489 master-0 kubenswrapper[28504]: I0318 13:24:09.642425 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:09.642489 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:09.642489 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:09.642489 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:09.643125 master-0 kubenswrapper[28504]: I0318 13:24:09.642498 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:10.437277 master-0 kubenswrapper[28504]: I0318 13:24:10.437096 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:24:10.437497 master-0 kubenswrapper[28504]: I0318 13:24:10.437330 28504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:24:10.457413 master-0 kubenswrapper[28504]: I0318 13:24:10.457359 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pfs29"
Mar 18 13:24:10.643283 master-0 kubenswrapper[28504]: I0318 13:24:10.643203 28504 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-mtnzv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:24:10.643283 master-0 kubenswrapper[28504]: [-]has-synced failed: reason withheld
Mar 18 13:24:10.643283 master-0 kubenswrapper[28504]: [+]process-running ok
Mar 18 13:24:10.643283 master-0 kubenswrapper[28504]: healthz check failed
Mar 18 13:24:10.643934 master-0 kubenswrapper[28504]: I0318 13:24:10.643302 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv" podUID="ab9ef7c0-f9f2-4048-9857-06ab48f36ecf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:24:11.643445 master-0 kubenswrapper[28504]: I0318 13:24:11.643322 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:24:11.647656 master-0 kubenswrapper[28504]: I0318 13:24:11.647606 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-mtnzv"
Mar 18 13:24:16.010261 master-0 kubenswrapper[28504]: I0318 13:24:16.010160 28504 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 13:24:16.011550 master-0 kubenswrapper[28504]: I0318 13:24:16.011460 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" containerID="cri-o://7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7" gracePeriod=5
Mar 18 13:24:16.937991 master-0 kubenswrapper[28504]: I0318 13:24:16.937913 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd"
Mar 18 13:24:21.552379 master-0 kubenswrapper[28504]: I0318 13:24:21.552318 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log"
Mar 18 13:24:21.552888 master-0 kubenswrapper[28504]: I0318 13:24:21.552390 28504 generic.go:334] "Generic (PLEG): container finished" podID="8e7a82869988463543d3d8dd1f0b5fe3" containerID="7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7" exitCode=137
Mar 18 13:24:22.659915 master-0 kubenswrapper[28504]: I0318 13:24:22.659863 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log"
Mar 18 13:24:22.660462 master-0 kubenswrapper[28504]: I0318 13:24:22.659969 28504 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:22.730572 master-0 kubenswrapper[28504]: I0318 13:24:22.730492 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730612 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730643 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730633 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730670 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730689 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730703 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log" (OuterVolumeSpecName: "var-log") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:22.730802 master-0 kubenswrapper[28504]: I0318 13:24:22.730731 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:22.731049 master-0 kubenswrapper[28504]: I0318 13:24:22.731010 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:22.731049 master-0 kubenswrapper[28504]: I0318 13:24:22.731024 28504 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:22.731049 master-0 kubenswrapper[28504]: I0318 13:24:22.731034 28504 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:22.731142 master-0 kubenswrapper[28504]: I0318 13:24:22.731102 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests" (OuterVolumeSpecName: "manifests") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:22.735694 master-0 kubenswrapper[28504]: I0318 13:24:22.735641 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:22.756460 master-0 kubenswrapper[28504]: I0318 13:24:22.756408 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7a82869988463543d3d8dd1f0b5fe3" path="/var/lib/kubelet/pods/8e7a82869988463543d3d8dd1f0b5fe3/volumes" Mar 18 13:24:22.756680 master-0 kubenswrapper[28504]: I0318 13:24:22.756648 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 18 13:24:22.832156 master-0 kubenswrapper[28504]: I0318 13:24:22.832067 28504 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:22.832156 master-0 kubenswrapper[28504]: I0318 13:24:22.832128 28504 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:23.565923 master-0 kubenswrapper[28504]: I0318 13:24:23.565889 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 18 13:24:23.566294 master-0 kubenswrapper[28504]: I0318 13:24:23.566280 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:24.056758 master-0 kubenswrapper[28504]: E0318 13:24:24.056705 28504 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.308s" Mar 18 13:24:24.056758 master-0 kubenswrapper[28504]: I0318 13:24:24.056759 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:24:24.057449 master-0 kubenswrapper[28504]: I0318 13:24:24.056780 28504 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="e1c69095-8fa0-40d4-ae45-2ef81798edce" Mar 18 13:24:24.057449 master-0 kubenswrapper[28504]: I0318 13:24:24.056854 28504 scope.go:117] "RemoveContainer" containerID="7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7" Mar 18 13:24:24.060739 master-0 kubenswrapper[28504]: I0318 13:24:24.060675 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:24:24.060857 master-0 kubenswrapper[28504]: I0318 13:24:24.060739 28504 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="e1c69095-8fa0-40d4-ae45-2ef81798edce" Mar 18 13:24:24.069047 master-0 kubenswrapper[28504]: I0318 13:24:24.068921 28504 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1c69095-8fa0-40d4-ae45-2ef81798edce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T13:24:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T13:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [startup-monitor]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T13:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [startup-monitor]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"startup-monitor\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c596290b16b735fd2873e580d133696dfedc347b3f8e0e91a59ac0b73f33ad7\\\",\\\"exitCode\\\":137,\\\"finishedAt\\\":\\\"2026-03-18T13:24:21Z\\\",\\\"message\\\":\\\"c0008e6980)({\\\\n restConfig: (*rest.Config)(\\\\u003cnil\\\\u003e),\\\\n client: (*http.Client)(\\\\u003cnil\\\\u003e),\\\\n baseRawURL: (string) (len=22) \\\\\\\"https://localhost:6443\\\\\\\",\\\\n kubeClient: (*kubernetes.Clientset)(\\\\u003cnil\\\\u003e),\\\\n currentNodeName: (string) 
\\\\\\\"\\\\\\\"\\\\n })\\\\n}\\\\nI0318 13:23:40.944356 1 monitor.go:79] Waiting for readiness (interval 1s, timeout 5m0s)...\\\\nI0318 13:23:41.247634 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0318 13:23:41.247665 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0318 13:23:41.250796 1 monitor.go:103] Watching kube-apiserver of revision 3: waiting for kube-apiserver static pod to listen on port 6443: Get \\\\\\\"https://localhost:6443/healthz/etcd\\\\\\\": dial tcp [::1]:6443: connect: connection refused (NetworkError)\\\\nI0318 13:23:48.857964 1 monitor.go:103] Watching kube-apiserver of revision 3: waiting for kube-apiserver static pod for node master-0 to show up (PodNotRunning)\\\\nI0318 13:23:50.166743 1 monitor.go:103] Watching kube-apiserver of revision 3: waiting for kube-apiserver static pod for node master-0 to show up (PodNotRunning)\\\\nI0318 13:23:51.516575 1 monitor.go:103] Watching kube-apiserver of revision 3: waiting for kube-apiserver static pod for node master-0 to show up (PodNotRunning)\\\\nI0318 13:23:52.820417 1 monitor.go:103] Watching kube-apiserver of revision 3: waiting for kube-apiserver static pod for node master-0 to show up (PodNotRunning)\\\\nI0318 13:24:04.278140 1 monitor.go:103] Watching kube-apiserver of revision 3: waiting for kube-apiserver static pod kube-apiserver-master-0 to be ready (PodNodReady)\\\\nI0318 13:24:15.957381 1 fallback.go:205] Created a symlink /etc/kubernetes/static-pod-resources/kube-apiserver-last-known-good for /etc/kubernetes/static-pod-resources/kube-apiserver-pod-3/kube-apiserver-pod.yaml\\\\nI0318 13:24:16.009135 1 cmd.go:202] Waiting for SIGTERM...\\\\nI0318 13:24:16.015154 1 signal.go:18] Received SIGTERM or SIGINT signal, shutting down the process.\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T13:23:40Z\\\"}}}]}}\" for 
pod \"openshift-kube-apiserver\"/\"kube-apiserver-startup-monitor-master-0\": pods \"kube-apiserver-startup-monitor-master-0\" not found" Mar 18 13:24:24.930612 master-0 kubenswrapper[28504]: I0318 13:24:24.930551 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:24.930834 master-0 kubenswrapper[28504]: E0318 13:24:24.930738 28504 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:24.930834 master-0 kubenswrapper[28504]: E0318 13:24:24.930767 28504 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:24.930971 master-0 kubenswrapper[28504]: E0318 13:24:24.930842 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access podName:810ed1fb-bd32-4e5d-94e6-011f21ff37d3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:56.930809357 +0000 UTC m=+74.425615142 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access") pod "installer-3-master-0" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:26.901105 master-0 kubenswrapper[28504]: I0318 13:24:26.901030 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nzrmh"] Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901346 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901365 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901385 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901394 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901412 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901421 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901434 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerName="installer" Mar 18 13:24:26.901789 master-0 
kubenswrapper[28504]: I0318 13:24:26.901443 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901462 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901470 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901485 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901493 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901503 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2669bc40-9271-4494-9e21-290cd4383b05" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901511 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="2669bc40-9271-4494-9e21-290cd4383b05" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901519 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901528 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901539 28504 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901547 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901565 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901573 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901586 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cd8323-8857-41fe-85d4-e6064330ec71" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901594 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cd8323-8857-41fe-85d4-e6064330ec71" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901608 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901616 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: E0318 13:24:26.901630 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901637 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerName="installer" Mar 18 13:24:26.901789 master-0 
kubenswrapper[28504]: E0318 13:24:26.901649 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901657 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901786 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="5879ced8-4ac1-40e3-bf93-38b8a7497823" containerName="installer" Mar 18 13:24:26.901789 master-0 kubenswrapper[28504]: I0318 13:24:26.901811 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901829 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901846 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="810ed1fb-bd32-4e5d-94e6-011f21ff37d3" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901863 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0403564-f8d9-4d81-b9e3-d9028fe58590" containerName="assisted-installer-controller" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901878 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901889 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d88fc1-4e92-432e-ac2c-e1c489b15e93" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901904 28504 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901916 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fca2c29-3791-43b8-97f1-a9a6d58ec92d" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901926 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d262b4-b1a7-49b8-a8d2-1bb1ea671df8" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901959 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32b4d4d-df54-4fa7-a940-297e064fea44" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901974 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cd8323-8857-41fe-85d4-e6064330ec71" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.901991 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="2669bc40-9271-4494-9e21-290cd4383b05" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.902002 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="245f3af1-ccfb-4191-9a34-00852e52a73d" containerName="installer" Mar 18 13:24:26.903270 master-0 kubenswrapper[28504]: I0318 13:24:26.902525 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nzrmh" Mar 18 13:24:26.904779 master-0 kubenswrapper[28504]: I0318 13:24:26.904719 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 13:24:26.905750 master-0 kubenswrapper[28504]: I0318 13:24:26.905710 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-dttzl" Mar 18 13:24:27.061453 master-0 kubenswrapper[28504]: I0318 13:24:27.061180 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vhs\" (UniqueName: \"kubernetes.io/projected/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-kube-api-access-l2vhs\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh" Mar 18 13:24:27.061453 master-0 kubenswrapper[28504]: I0318 13:24:27.061445 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-host\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh" Mar 18 13:24:27.061750 master-0 kubenswrapper[28504]: I0318 13:24:27.061488 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-serviceca\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh" Mar 18 13:24:27.162459 master-0 kubenswrapper[28504]: I0318 13:24:27.162323 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-host\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " 
pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.162459 master-0 kubenswrapper[28504]: I0318 13:24:27.162415 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-serviceca\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.162712 master-0 kubenswrapper[28504]: I0318 13:24:27.162446 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-host\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.162712 master-0 kubenswrapper[28504]: I0318 13:24:27.162563 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2vhs\" (UniqueName: \"kubernetes.io/projected/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-kube-api-access-l2vhs\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.163161 master-0 kubenswrapper[28504]: I0318 13:24:27.163108 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-serviceca\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.255259 master-0 kubenswrapper[28504]: I0318 13:24:27.255214 28504 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 18 13:24:27.258065 master-0 kubenswrapper[28504]: I0318 13:24:27.258018 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2vhs\" (UniqueName: \"kubernetes.io/projected/ebbaf8e6-9de8-44ce-9f6c-bb4804723598-kube-api-access-l2vhs\") pod \"node-ca-nzrmh\" (UID: \"ebbaf8e6-9de8-44ce-9f6c-bb4804723598\") " pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.516362 master-0 kubenswrapper[28504]: I0318 13:24:27.516280 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nzrmh"
Mar 18 13:24:27.539466 master-0 kubenswrapper[28504]: W0318 13:24:27.537582 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbaf8e6_9de8_44ce_9f6c_bb4804723598.slice/crio-a0b1b0fb060df8094f76c3b103f05692cb0c797116971ff9c8c9fac10a4e3718 WatchSource:0}: Error finding container a0b1b0fb060df8094f76c3b103f05692cb0c797116971ff9c8c9fac10a4e3718: Status 404 returned error can't find the container with id a0b1b0fb060df8094f76c3b103f05692cb0c797116971ff9c8c9fac10a4e3718
Mar 18 13:24:27.540257 master-0 kubenswrapper[28504]: I0318 13:24:27.540011 28504 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 13:24:27.612112 master-0 kubenswrapper[28504]: I0318 13:24:27.612049 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nzrmh" event={"ID":"ebbaf8e6-9de8-44ce-9f6c-bb4804723598","Type":"ContainerStarted","Data":"a0b1b0fb060df8094f76c3b103f05692cb0c797116971ff9c8c9fac10a4e3718"}
Mar 18 13:24:30.634156 master-0 kubenswrapper[28504]: I0318 13:24:30.634045 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nzrmh" event={"ID":"ebbaf8e6-9de8-44ce-9f6c-bb4804723598","Type":"ContainerStarted","Data":"b25a065ae6c5d2a84aae872a0c43e5723667ff1016cde3c567d4634f5d447fb8"}
Mar 18 13:24:30.662309 master-0 kubenswrapper[28504]: I0318 13:24:30.662181 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nzrmh" podStartSLOduration=2.032230774 podStartE2EDuration="4.662149732s" podCreationTimestamp="2026-03-18 13:24:26 +0000 UTC" firstStartedPulling="2026-03-18 13:24:27.539891977 +0000 UTC m=+45.034697762" lastFinishedPulling="2026-03-18 13:24:30.169810945 +0000 UTC m=+47.664616720" observedRunningTime="2026-03-18 13:24:30.661137124 +0000 UTC m=+48.155942909" watchObservedRunningTime="2026-03-18 13:24:30.662149732 +0000 UTC m=+48.156955517"
Mar 18 13:24:39.170777 master-0 kubenswrapper[28504]: I0318 13:24:39.170720 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 13:24:39.171700 master-0 kubenswrapper[28504]: I0318 13:24:39.171670 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.173761 master-0 kubenswrapper[28504]: I0318 13:24:39.173678 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-n2lc2"
Mar 18 13:24:39.173915 master-0 kubenswrapper[28504]: I0318 13:24:39.173878 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 13:24:39.203065 master-0 kubenswrapper[28504]: I0318 13:24:39.203003 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 13:24:39.245523 master-0 kubenswrapper[28504]: I0318 13:24:39.245461 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.245767 master-0 kubenswrapper[28504]: I0318 13:24:39.245548 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.245767 master-0 kubenswrapper[28504]: I0318 13:24:39.245573 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-var-lock\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.346392 master-0 kubenswrapper[28504]: I0318 13:24:39.346342 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.347108 master-0 kubenswrapper[28504]: I0318 13:24:39.346449 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.347195 master-0 kubenswrapper[28504]: I0318 13:24:39.347178 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.347275 master-0 kubenswrapper[28504]: I0318 13:24:39.347260 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-var-lock\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.347422 master-0 kubenswrapper[28504]: I0318 13:24:39.347374 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-var-lock\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.379107 master-0 kubenswrapper[28504]: I0318 13:24:39.379059 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:39.489710 master-0 kubenswrapper[28504]: I0318 13:24:39.489647 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 13:24:40.026125 master-0 kubenswrapper[28504]: I0318 13:24:40.025834 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 13:24:40.694125 master-0 kubenswrapper[28504]: I0318 13:24:40.694053 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"534e47b8-8bec-4dfb-be89-fb018a5edbb0","Type":"ContainerStarted","Data":"dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b"}
Mar 18 13:24:40.694125 master-0 kubenswrapper[28504]: I0318 13:24:40.694116 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"534e47b8-8bec-4dfb-be89-fb018a5edbb0","Type":"ContainerStarted","Data":"2ae5f4c0ab40586c4ee37458baf59eca377b8a70c2d036b5815ace707c5659ee"}
Mar 18 13:24:40.761309 master-0 kubenswrapper[28504]: I0318 13:24:40.761241 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=1.7612227520000001 podStartE2EDuration="1.761222752s" podCreationTimestamp="2026-03-18 13:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:24:40.757888958 +0000 UTC m=+58.252694733" watchObservedRunningTime="2026-03-18 13:24:40.761222752 +0000 UTC m=+58.256028517"
Mar 18 13:24:42.891021 master-0 kubenswrapper[28504]: I0318 13:24:42.890959 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-574cc54585-w6425"]
Mar 18 13:24:42.893791 master-0 kubenswrapper[28504]: I0318 13:24:42.892653 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:42.897531 master-0 kubenswrapper[28504]: I0318 13:24:42.897500 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 18 13:24:42.901203 master-0 kubenswrapper[28504]: I0318 13:24:42.901180 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-8rcx8"
Mar 18 13:24:42.992481 master-0 kubenswrapper[28504]: I0318 13:24:42.991994 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-574cc54585-w6425"]
Mar 18 13:24:43.006960 master-0 kubenswrapper[28504]: I0318 13:24:42.998764 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/300098ac-781e-48e5-bbab-4c1009ecf6a2-monitoring-plugin-cert\") pod \"monitoring-plugin-574cc54585-w6425\" (UID: \"300098ac-781e-48e5-bbab-4c1009ecf6a2\") " pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:43.100852 master-0 kubenswrapper[28504]: I0318 13:24:43.100344 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/300098ac-781e-48e5-bbab-4c1009ecf6a2-monitoring-plugin-cert\") pod \"monitoring-plugin-574cc54585-w6425\" (UID: \"300098ac-781e-48e5-bbab-4c1009ecf6a2\") " pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:43.112816 master-0 kubenswrapper[28504]: I0318 13:24:43.112743 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/300098ac-781e-48e5-bbab-4c1009ecf6a2-monitoring-plugin-cert\") pod \"monitoring-plugin-574cc54585-w6425\" (UID: \"300098ac-781e-48e5-bbab-4c1009ecf6a2\") " pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:43.213585 master-0 kubenswrapper[28504]: I0318 13:24:43.213514 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:43.741523 master-0 kubenswrapper[28504]: I0318 13:24:43.741463 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-574cc54585-w6425"]
Mar 18 13:24:43.773534 master-0 kubenswrapper[28504]: I0318 13:24:43.773462 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"]
Mar 18 13:24:43.782400 master-0 kubenswrapper[28504]: I0318 13:24:43.782349 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.784121 master-0 kubenswrapper[28504]: I0318 13:24:43.784080 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 13:24:43.788818 master-0 kubenswrapper[28504]: I0318 13:24:43.788529 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 13:24:43.788818 master-0 kubenswrapper[28504]: I0318 13:24:43.788806 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 18 13:24:43.789134 master-0 kubenswrapper[28504]: I0318 13:24:43.789050 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-lqqf9"
Mar 18 13:24:43.789134 master-0 kubenswrapper[28504]: I0318 13:24:43.789101 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 13:24:43.789215 master-0 kubenswrapper[28504]: I0318 13:24:43.789057 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 13:24:43.790974 master-0 kubenswrapper[28504]: I0318 13:24:43.789288 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 18 13:24:43.790974 master-0 kubenswrapper[28504]: I0318 13:24:43.789386 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 18 13:24:43.791336 master-0 kubenswrapper[28504]: I0318 13:24:43.791272 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 13:24:43.793553 master-0 kubenswrapper[28504]: I0318 13:24:43.791479 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 18 13:24:43.793553 master-0 kubenswrapper[28504]: I0318 13:24:43.791506 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 13:24:43.794784 master-0 kubenswrapper[28504]: I0318 13:24:43.793790 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 13:24:43.797237 master-0 kubenswrapper[28504]: I0318 13:24:43.796881 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"]
Mar 18 13:24:43.800285 master-0 kubenswrapper[28504]: I0318 13:24:43.800056 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 18 13:24:43.804311 master-0 kubenswrapper[28504]: I0318 13:24:43.804270 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 18 13:24:43.910912 master-0 kubenswrapper[28504]: I0318 13:24:43.910867 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-session\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.910929 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9d797e5-c145-4658-9318-06ee1106173f-audit-dir\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.910965 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.910996 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.911029 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzblb\" (UniqueName: \"kubernetes.io/projected/d9d797e5-c145-4658-9318-06ee1106173f-kube-api-access-tzblb\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.911187 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.911243 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.911285 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.911332 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911441 master-0 kubenswrapper[28504]: I0318 13:24:43.911439 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911831 master-0 kubenswrapper[28504]: I0318 13:24:43.911475 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911831 master-0 kubenswrapper[28504]: I0318 13:24:43.911568 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:43.911831 master-0 kubenswrapper[28504]: I0318 13:24:43.911609 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-audit-policies\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.012752 master-0 kubenswrapper[28504]: I0318 13:24:44.012635 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.012752 master-0 kubenswrapper[28504]: I0318 13:24:44.012684 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.012752 master-0 kubenswrapper[28504]: I0318 13:24:44.012714 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.012752 master-0 kubenswrapper[28504]: I0318 13:24:44.012737 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.012761 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.012784 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.012824 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.012857 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-audit-policies\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.012895 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-session\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.012968 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9d797e5-c145-4658-9318-06ee1106173f-audit-dir\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.013010 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013040 master-0 kubenswrapper[28504]: I0318 13:24:44.013036 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.013264 master-0 kubenswrapper[28504]: I0318 13:24:44.013062 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzblb\" (UniqueName: \"kubernetes.io/projected/d9d797e5-c145-4658-9318-06ee1106173f-kube-api-access-tzblb\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.014744 master-0 kubenswrapper[28504]: E0318 13:24:44.013442 28504 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:44.014744 master-0 kubenswrapper[28504]: E0318 13:24:44.013491 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig podName:d9d797e5-c145-4658-9318-06ee1106173f nodeName:}" failed. No retries permitted until 2026-03-18 13:24:44.513476363 +0000 UTC m=+62.008282128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig") pod "oauth-openshift-6b4f4557cc-rmfv5" (UID: "d9d797e5-c145-4658-9318-06ee1106173f") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:44.014744 master-0 kubenswrapper[28504]: I0318 13:24:44.014185 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.014744 master-0 kubenswrapper[28504]: I0318 13:24:44.014239 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-audit-policies\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.015235 master-0 kubenswrapper[28504]: I0318 13:24:44.015171 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9d797e5-c145-4658-9318-06ee1106173f-audit-dir\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.030083 master-0 kubenswrapper[28504]: I0318 13:24:44.028978 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.030083 master-0 kubenswrapper[28504]: I0318 13:24:44.029111 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.030083 master-0 kubenswrapper[28504]: I0318 13:24:44.029293 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-session\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.030083 master-0 kubenswrapper[28504]: I0318 13:24:44.029447 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.030083 master-0 kubenswrapper[28504]: I0318 13:24:44.029452 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.030083 master-0 kubenswrapper[28504]: I0318 13:24:44.029824 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.038761 master-0 kubenswrapper[28504]: I0318 13:24:44.038716 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.040391 master-0 kubenswrapper[28504]: I0318 13:24:44.040057 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.053759 master-0 kubenswrapper[28504]: I0318 13:24:44.053680 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzblb\" (UniqueName: \"kubernetes.io/projected/d9d797e5-c145-4658-9318-06ee1106173f-kube-api-access-tzblb\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.520277 master-0 kubenswrapper[28504]: I0318 13:24:44.520218 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:44.520509 master-0 kubenswrapper[28504]: E0318 13:24:44.520379 28504 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:44.520509 master-0 kubenswrapper[28504]: E0318 13:24:44.520441 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig podName:d9d797e5-c145-4658-9318-06ee1106173f nodeName:}" failed. No retries permitted until 2026-03-18 13:24:45.520424602 +0000 UTC m=+63.015230377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig") pod "oauth-openshift-6b4f4557cc-rmfv5" (UID: "d9d797e5-c145-4658-9318-06ee1106173f") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:44.737156 master-0 kubenswrapper[28504]: I0318 13:24:44.737080 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425" event={"ID":"300098ac-781e-48e5-bbab-4c1009ecf6a2","Type":"ContainerStarted","Data":"6cdb45dc081f25ff36f1f88e3884564223314cd6dbd048929e199190da55e629"}
Mar 18 13:24:45.535991 master-0 kubenswrapper[28504]: I0318 13:24:45.535898 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:45.536515 master-0 kubenswrapper[28504]: E0318 13:24:45.536058 28504 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:45.536515 master-0 kubenswrapper[28504]: E0318 13:24:45.536174 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig podName:d9d797e5-c145-4658-9318-06ee1106173f nodeName:}" failed. No retries permitted until 2026-03-18 13:24:47.536147911 +0000 UTC m=+65.030953696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig") pod "oauth-openshift-6b4f4557cc-rmfv5" (UID: "d9d797e5-c145-4658-9318-06ee1106173f") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:46.761982 master-0 kubenswrapper[28504]: I0318 13:24:46.761914 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:46.762666 master-0 kubenswrapper[28504]: I0318 13:24:46.761983 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425" event={"ID":"300098ac-781e-48e5-bbab-4c1009ecf6a2","Type":"ContainerStarted","Data":"738698b53def08708508618f14d1ca54e250f14d774ee3a024774bfbd9d79aa8"}
Mar 18 13:24:46.762666 master-0 kubenswrapper[28504]: I0318 13:24:46.762054 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425"
Mar 18 13:24:46.777315 master-0 kubenswrapper[28504]: I0318 13:24:46.777240 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-574cc54585-w6425" podStartSLOduration=2.471152162 podStartE2EDuration="4.777222767s" podCreationTimestamp="2026-03-18 13:24:42 +0000 UTC" firstStartedPulling="2026-03-18 13:24:43.759424757 +0000 UTC m=+61.254230522" lastFinishedPulling="2026-03-18 13:24:46.065495352 +0000 UTC m=+63.560301127" observedRunningTime="2026-03-18 13:24:46.776313311 +0000 UTC m=+64.271119086" watchObservedRunningTime="2026-03-18 13:24:46.777222767 +0000 UTC m=+64.272028542"
Mar 18 13:24:47.562474 master-0 kubenswrapper[28504]: I0318 13:24:47.562403 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4f4557cc-rmfv5\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"
Mar 18 13:24:47.562717 master-0 kubenswrapper[28504]: E0318 13:24:47.562632 28504 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 13:24:47.562717 master-0 kubenswrapper[28504]: E0318 13:24:47.562691 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig podName:d9d797e5-c145-4658-9318-06ee1106173f nodeName:}" failed. No retries permitted until 2026-03-18 13:24:51.562671999 +0000 UTC m=+69.057477774 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig") pod "oauth-openshift-6b4f4557cc-rmfv5" (UID: "d9d797e5-c145-4658-9318-06ee1106173f") : configmap "v4-0-config-system-cliconfig" not found Mar 18 13:24:49.337206 master-0 kubenswrapper[28504]: I0318 13:24:49.336671 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"] Mar 18 13:24:49.337837 master-0 kubenswrapper[28504]: E0318 13:24:49.337241 28504 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5" podUID="d9d797e5-c145-4658-9318-06ee1106173f" Mar 18 13:24:49.769868 master-0 kubenswrapper[28504]: I0318 13:24:49.769791 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5" Mar 18 13:24:49.780784 master-0 kubenswrapper[28504]: I0318 13:24:49.780735 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5" Mar 18 13:24:49.898231 master-0 kubenswrapper[28504]: I0318 13:24:49.898166 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-error\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.898710 master-0 kubenswrapper[28504]: I0318 13:24:49.898686 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-ocp-branding-template\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.898783 master-0 kubenswrapper[28504]: I0318 13:24:49.898739 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzblb\" (UniqueName: \"kubernetes.io/projected/d9d797e5-c145-4658-9318-06ee1106173f-kube-api-access-tzblb\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.898783 master-0 kubenswrapper[28504]: I0318 13:24:49.898767 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-session\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.898875 master-0 kubenswrapper[28504]: I0318 13:24:49.898819 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-provider-selection\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.898875 master-0 kubenswrapper[28504]: I0318 13:24:49.898854 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-trusted-ca-bundle\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.898984 master-0 kubenswrapper[28504]: I0318 13:24:49.898897 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-serving-cert\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.899046 master-0 kubenswrapper[28504]: I0318 13:24:49.898980 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-router-certs\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.899046 master-0 kubenswrapper[28504]: I0318 13:24:49.899012 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9d797e5-c145-4658-9318-06ee1106173f-audit-dir\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.899046 master-0 kubenswrapper[28504]: I0318 13:24:49.899040 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-login\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.899161 master-0 kubenswrapper[28504]: I0318 13:24:49.899082 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-audit-policies\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.899161 master-0 kubenswrapper[28504]: I0318 13:24:49.899116 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9d797e5-c145-4658-9318-06ee1106173f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:49.899237 master-0 kubenswrapper[28504]: I0318 13:24:49.899165 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-service-ca\") pod \"d9d797e5-c145-4658-9318-06ee1106173f\" (UID: \"d9d797e5-c145-4658-9318-06ee1106173f\") " Mar 18 13:24:49.899526 master-0 kubenswrapper[28504]: I0318 13:24:49.899500 28504 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9d797e5-c145-4658-9318-06ee1106173f-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:49.900210 master-0 kubenswrapper[28504]: I0318 13:24:49.900008 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod 
"d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:49.900210 master-0 kubenswrapper[28504]: I0318 13:24:49.900072 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:49.900426 master-0 kubenswrapper[28504]: I0318 13:24:49.900381 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:49.901912 master-0 kubenswrapper[28504]: I0318 13:24:49.901102 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:49.901912 master-0 kubenswrapper[28504]: I0318 13:24:49.901686 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d797e5-c145-4658-9318-06ee1106173f-kube-api-access-tzblb" (OuterVolumeSpecName: "kube-api-access-tzblb") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "kube-api-access-tzblb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:24:49.902193 master-0 kubenswrapper[28504]: I0318 13:24:49.902143 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:49.902339 master-0 kubenswrapper[28504]: I0318 13:24:49.902279 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:49.902339 master-0 kubenswrapper[28504]: I0318 13:24:49.902314 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:49.902540 master-0 kubenswrapper[28504]: I0318 13:24:49.902491 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:49.903439 master-0 kubenswrapper[28504]: I0318 13:24:49.903402 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:49.904591 master-0 kubenswrapper[28504]: I0318 13:24:49.904559 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d9d797e5-c145-4658-9318-06ee1106173f" (UID: "d9d797e5-c145-4658-9318-06ee1106173f"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:50.000293 master-0 kubenswrapper[28504]: I0318 13:24:50.000252 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000512 master-0 kubenswrapper[28504]: I0318 13:24:50.000497 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000581 master-0 kubenswrapper[28504]: I0318 13:24:50.000568 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000646 master-0 kubenswrapper[28504]: I0318 13:24:50.000633 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzblb\" (UniqueName: \"kubernetes.io/projected/d9d797e5-c145-4658-9318-06ee1106173f-kube-api-access-tzblb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000706 master-0 kubenswrapper[28504]: I0318 13:24:50.000697 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000766 master-0 kubenswrapper[28504]: I0318 13:24:50.000755 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000831 
master-0 kubenswrapper[28504]: I0318 13:24:50.000820 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000898 master-0 kubenswrapper[28504]: I0318 13:24:50.000887 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.000995 master-0 kubenswrapper[28504]: I0318 13:24:50.000981 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.001090 master-0 kubenswrapper[28504]: I0318 13:24:50.001071 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.001179 master-0 kubenswrapper[28504]: I0318 13:24:50.001168 28504 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:50.774926 master-0 kubenswrapper[28504]: I0318 13:24:50.774885 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5" Mar 18 13:24:50.969515 master-0 kubenswrapper[28504]: I0318 13:24:50.969438 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cbf579478-nnclj"] Mar 18 13:24:50.970633 master-0 kubenswrapper[28504]: I0318 13:24:50.970566 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:50.974208 master-0 kubenswrapper[28504]: I0318 13:24:50.972578 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 13:24:50.974208 master-0 kubenswrapper[28504]: I0318 13:24:50.973874 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 13:24:50.974451 master-0 kubenswrapper[28504]: I0318 13:24:50.974308 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 13:24:50.974451 master-0 kubenswrapper[28504]: I0318 13:24:50.974367 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 13:24:50.974558 master-0 kubenswrapper[28504]: I0318 13:24:50.974532 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.974739 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.974781 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.974864 28504 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.974980 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-lqqf9" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.975035 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.974983 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 13:24:50.976217 master-0 kubenswrapper[28504]: I0318 13:24:50.975498 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 13:24:50.980712 master-0 kubenswrapper[28504]: I0318 13:24:50.978976 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"] Mar 18 13:24:50.987109 master-0 kubenswrapper[28504]: I0318 13:24:50.984589 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 13:24:50.988207 master-0 kubenswrapper[28504]: I0318 13:24:50.985544 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6b4f4557cc-rmfv5"] Mar 18 13:24:50.991496 master-0 kubenswrapper[28504]: I0318 13:24:50.991468 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 13:24:51.020037 master-0 kubenswrapper[28504]: I0318 13:24:51.016797 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cbf579478-nnclj"] Mar 18 13:24:51.185260 master-0 
kubenswrapper[28504]: I0318 13:24:51.185196 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185260 master-0 kubenswrapper[28504]: I0318 13:24:51.185267 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4z4s\" (UniqueName: \"kubernetes.io/projected/096901d4-7e1a-4de8-b6b2-0acf03e98472-kube-api-access-c4z4s\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185301 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185342 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-error\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185369 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185416 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-policies\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185438 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-session\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185478 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185522 master-0 kubenswrapper[28504]: I0318 13:24:51.185519 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185797 master-0 kubenswrapper[28504]: I0318 13:24:51.185608 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-login\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185797 master-0 kubenswrapper[28504]: I0318 13:24:51.185643 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185797 master-0 kubenswrapper[28504]: I0318 13:24:51.185672 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-dir\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185797 master-0 kubenswrapper[28504]: I0318 13:24:51.185697 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.185797 master-0 kubenswrapper[28504]: I0318 13:24:51.185744 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d9d797e5-c145-4658-9318-06ee1106173f-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:51.287428 master-0 kubenswrapper[28504]: I0318 13:24:51.287305 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.287642 master-0 kubenswrapper[28504]: I0318 13:24:51.287627 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4z4s\" (UniqueName: \"kubernetes.io/projected/096901d4-7e1a-4de8-b6b2-0acf03e98472-kube-api-access-c4z4s\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.287843 master-0 kubenswrapper[28504]: I0318 13:24:51.287828 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.287988 master-0 kubenswrapper[28504]: I0318 13:24:51.287973 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-error\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.288161 master-0 kubenswrapper[28504]: I0318 13:24:51.288140 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.288283 master-0 kubenswrapper[28504]: I0318 13:24:51.288269 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-policies\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.288373 master-0 kubenswrapper[28504]: I0318 13:24:51.288361 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-session\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.288470 master-0 kubenswrapper[28504]: I0318 13:24:51.288456 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.288962 master-0 kubenswrapper[28504]: I0318 13:24:51.288168 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289025 master-0 kubenswrapper[28504]: I0318 13:24:51.288986 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-policies\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289025 master-0 kubenswrapper[28504]: I0318 13:24:51.288922 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289189 master-0 kubenswrapper[28504]: I0318 13:24:51.289126 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-login\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " 
pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289348 master-0 kubenswrapper[28504]: I0318 13:24:51.289310 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289389 master-0 kubenswrapper[28504]: I0318 13:24:51.289376 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289591 master-0 kubenswrapper[28504]: I0318 13:24:51.289558 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-dir\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.289751 master-0 kubenswrapper[28504]: I0318 13:24:51.289724 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-dir\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.290014 master-0 kubenswrapper[28504]: I0318 13:24:51.289983 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.290606 master-0 kubenswrapper[28504]: I0318 13:24:51.290588 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.290904 master-0 kubenswrapper[28504]: I0318 13:24:51.290867 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-error\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.291198 master-0 kubenswrapper[28504]: I0318 13:24:51.291158 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-session\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.291536 master-0 kubenswrapper[28504]: I0318 13:24:51.291507 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: 
\"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.291819 master-0 kubenswrapper[28504]: I0318 13:24:51.291782 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-login\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.292042 master-0 kubenswrapper[28504]: I0318 13:24:51.292006 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.292850 master-0 kubenswrapper[28504]: I0318 13:24:51.292808 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.300464 master-0 kubenswrapper[28504]: I0318 13:24:51.300404 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.304948 master-0 kubenswrapper[28504]: I0318 13:24:51.304877 
28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4z4s\" (UniqueName: \"kubernetes.io/projected/096901d4-7e1a-4de8-b6b2-0acf03e98472-kube-api-access-c4z4s\") pod \"oauth-openshift-7cbf579478-nnclj\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") " pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:51.592473 master-0 kubenswrapper[28504]: I0318 13:24:51.592334 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:52.078497 master-0 kubenswrapper[28504]: I0318 13:24:52.078333 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cbf579478-nnclj"] Mar 18 13:24:52.084291 master-0 kubenswrapper[28504]: W0318 13:24:52.084233 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096901d4_7e1a_4de8_b6b2_0acf03e98472.slice/crio-4e2a264fd1049a285d9ab7a08ee511cf8934351ad13ece3fcd6dcfa4bb0512f7 WatchSource:0}: Error finding container 4e2a264fd1049a285d9ab7a08ee511cf8934351ad13ece3fcd6dcfa4bb0512f7: Status 404 returned error can't find the container with id 4e2a264fd1049a285d9ab7a08ee511cf8934351ad13ece3fcd6dcfa4bb0512f7 Mar 18 13:24:52.759005 master-0 kubenswrapper[28504]: I0318 13:24:52.758947 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d797e5-c145-4658-9318-06ee1106173f" path="/var/lib/kubelet/pods/d9d797e5-c145-4658-9318-06ee1106173f/volumes" Mar 18 13:24:52.788466 master-0 kubenswrapper[28504]: I0318 13:24:52.788398 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" event={"ID":"096901d4-7e1a-4de8-b6b2-0acf03e98472","Type":"ContainerStarted","Data":"4e2a264fd1049a285d9ab7a08ee511cf8934351ad13ece3fcd6dcfa4bb0512f7"} Mar 18 13:24:55.077462 master-0 kubenswrapper[28504]: I0318 
13:24:55.077405 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7cbf579478-nnclj"] Mar 18 13:24:56.987984 master-0 kubenswrapper[28504]: I0318 13:24:56.987265 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:57.004479 master-0 kubenswrapper[28504]: I0318 13:24:56.990879 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:57.089359 master-0 kubenswrapper[28504]: I0318 13:24:57.088788 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") pod \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\" (UID: \"810ed1fb-bd32-4e5d-94e6-011f21ff37d3\") " Mar 18 13:24:57.091616 master-0 kubenswrapper[28504]: I0318 13:24:57.091552 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "810ed1fb-bd32-4e5d-94e6-011f21ff37d3" (UID: "810ed1fb-bd32-4e5d-94e6-011f21ff37d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:24:57.236850 master-0 kubenswrapper[28504]: I0318 13:24:57.236774 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/810ed1fb-bd32-4e5d-94e6-011f21ff37d3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:58.266460 master-0 kubenswrapper[28504]: I0318 13:24:58.265386 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" event={"ID":"096901d4-7e1a-4de8-b6b2-0acf03e98472","Type":"ContainerStarted","Data":"9b2b3e896a922ee97e5f82a9f1c9bdfc013e4103eae660d131e8d231e759e084"} Mar 18 13:24:58.267071 master-0 kubenswrapper[28504]: I0318 13:24:58.267017 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:58.273751 master-0 kubenswrapper[28504]: I0318 13:24:58.273610 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" Mar 18 13:24:58.512515 master-0 kubenswrapper[28504]: I0318 13:24:58.512430 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" podStartSLOduration=4.791546167 podStartE2EDuration="9.512411161s" podCreationTimestamp="2026-03-18 13:24:49 +0000 UTC" firstStartedPulling="2026-03-18 13:24:52.086276289 +0000 UTC m=+69.581082064" lastFinishedPulling="2026-03-18 13:24:56.807141283 +0000 UTC m=+74.301947058" observedRunningTime="2026-03-18 13:24:58.450190547 +0000 UTC m=+75.944996322" watchObservedRunningTime="2026-03-18 13:24:58.512411161 +0000 UTC m=+76.007216946" Mar 18 13:24:58.769832 master-0 kubenswrapper[28504]: I0318 13:24:58.769762 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"] Mar 18 13:24:58.770166 
master-0 kubenswrapper[28504]: I0318 13:24:58.770051 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" containerID="cri-o://673862c17b9e84a9d59c686af6c0f638cfa8ae15c58a6c7387f904f4b2566d48" gracePeriod=30 Mar 18 13:24:58.786297 master-0 kubenswrapper[28504]: I0318 13:24:58.786087 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"] Mar 18 13:24:58.786761 master-0 kubenswrapper[28504]: I0318 13:24:58.786391 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" containerID="cri-o://63d70024e5607dd0f325c1dca25a80e4589b1a262a3d3f4834d611ea24de9a2b" gracePeriod=30 Mar 18 13:24:59.279948 master-0 kubenswrapper[28504]: I0318 13:24:59.279852 28504 generic.go:334] "Generic (PLEG): container finished" podID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerID="63d70024e5607dd0f325c1dca25a80e4589b1a262a3d3f4834d611ea24de9a2b" exitCode=0 Mar 18 13:24:59.280598 master-0 kubenswrapper[28504]: I0318 13:24:59.279882 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" event={"ID":"65cfa12a-0711-4fba-8859-73a3f8f250a9","Type":"ContainerDied","Data":"63d70024e5607dd0f325c1dca25a80e4589b1a262a3d3f4834d611ea24de9a2b"} Mar 18 13:24:59.280598 master-0 kubenswrapper[28504]: I0318 13:24:59.280065 28504 scope.go:117] "RemoveContainer" containerID="8a450d61a86ca02f43befd316491f266f23f5f89125343df32e08e9b38e85140" Mar 18 13:24:59.282341 master-0 kubenswrapper[28504]: I0318 13:24:59.282312 28504 generic.go:334] "Generic (PLEG): container finished" 
podID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerID="673862c17b9e84a9d59c686af6c0f638cfa8ae15c58a6c7387f904f4b2566d48" exitCode=0 Mar 18 13:24:59.282413 master-0 kubenswrapper[28504]: I0318 13:24:59.282348 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" event={"ID":"a5a93d05-3c8e-4666-9a55-d8f9e902db78","Type":"ContainerDied","Data":"673862c17b9e84a9d59c686af6c0f638cfa8ae15c58a6c7387f904f4b2566d48"} Mar 18 13:24:59.349601 master-0 kubenswrapper[28504]: I0318 13:24:59.349555 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:24:59.353564 master-0 kubenswrapper[28504]: I0318 13:24:59.353408 28504 scope.go:117] "RemoveContainer" containerID="b10031bd90b55a9a696a81d72f5edb8059040095aa52e3160902d05b4a7cd6cf" Mar 18 13:24:59.362022 master-0 kubenswrapper[28504]: I0318 13:24:59.361229 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") pod \"65cfa12a-0711-4fba-8859-73a3f8f250a9\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " Mar 18 13:24:59.362022 master-0 kubenswrapper[28504]: I0318 13:24:59.361306 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") pod \"65cfa12a-0711-4fba-8859-73a3f8f250a9\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " Mar 18 13:24:59.362022 master-0 kubenswrapper[28504]: I0318 13:24:59.361330 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") pod \"65cfa12a-0711-4fba-8859-73a3f8f250a9\" (UID: 
\"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " Mar 18 13:24:59.362022 master-0 kubenswrapper[28504]: I0318 13:24:59.361377 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") pod \"65cfa12a-0711-4fba-8859-73a3f8f250a9\" (UID: \"65cfa12a-0711-4fba-8859-73a3f8f250a9\") " Mar 18 13:24:59.362414 master-0 kubenswrapper[28504]: I0318 13:24:59.362172 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config" (OuterVolumeSpecName: "config") pod "65cfa12a-0711-4fba-8859-73a3f8f250a9" (UID: "65cfa12a-0711-4fba-8859-73a3f8f250a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:59.362414 master-0 kubenswrapper[28504]: I0318 13:24:59.362193 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "65cfa12a-0711-4fba-8859-73a3f8f250a9" (UID: "65cfa12a-0711-4fba-8859-73a3f8f250a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:59.365143 master-0 kubenswrapper[28504]: I0318 13:24:59.365068 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65cfa12a-0711-4fba-8859-73a3f8f250a9" (UID: "65cfa12a-0711-4fba-8859-73a3f8f250a9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:59.376317 master-0 kubenswrapper[28504]: I0318 13:24:59.376240 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj" (OuterVolumeSpecName: "kube-api-access-xhzcj") pod "65cfa12a-0711-4fba-8859-73a3f8f250a9" (UID: "65cfa12a-0711-4fba-8859-73a3f8f250a9"). InnerVolumeSpecName "kube-api-access-xhzcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:24:59.414677 master-0 kubenswrapper[28504]: I0318 13:24:59.414564 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:24:59.463344 master-0 kubenswrapper[28504]: I0318 13:24:59.462913 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") pod \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " Mar 18 13:24:59.463344 master-0 kubenswrapper[28504]: I0318 13:24:59.463002 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") pod \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " Mar 18 13:24:59.463344 master-0 kubenswrapper[28504]: I0318 13:24:59.463028 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") pod \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " Mar 18 13:24:59.463344 master-0 kubenswrapper[28504]: I0318 13:24:59.463093 28504 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") pod \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " Mar 18 13:24:59.463344 master-0 kubenswrapper[28504]: I0318 13:24:59.463175 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") pod \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\" (UID: \"a5a93d05-3c8e-4666-9a55-d8f9e902db78\") " Mar 18 13:24:59.463807 master-0 kubenswrapper[28504]: I0318 13:24:59.463396 28504 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cfa12a-0711-4fba-8859-73a3f8f250a9-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.463807 master-0 kubenswrapper[28504]: I0318 13:24:59.463415 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhzcj\" (UniqueName: \"kubernetes.io/projected/65cfa12a-0711-4fba-8859-73a3f8f250a9-kube-api-access-xhzcj\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.463807 master-0 kubenswrapper[28504]: I0318 13:24:59.463426 28504 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.463807 master-0 kubenswrapper[28504]: I0318 13:24:59.463436 28504 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cfa12a-0711-4fba-8859-73a3f8f250a9-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.464142 master-0 kubenswrapper[28504]: I0318 13:24:59.463993 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"a5a93d05-3c8e-4666-9a55-d8f9e902db78" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:59.464986 master-0 kubenswrapper[28504]: I0318 13:24:59.464956 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config" (OuterVolumeSpecName: "config") pod "a5a93d05-3c8e-4666-9a55-d8f9e902db78" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:59.465641 master-0 kubenswrapper[28504]: I0318 13:24:59.465592 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a5a93d05-3c8e-4666-9a55-d8f9e902db78" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:24:59.467324 master-0 kubenswrapper[28504]: I0318 13:24:59.467268 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a5a93d05-3c8e-4666-9a55-d8f9e902db78" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:59.467673 master-0 kubenswrapper[28504]: I0318 13:24:59.467623 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt" (OuterVolumeSpecName: "kube-api-access-mthwt") pod "a5a93d05-3c8e-4666-9a55-d8f9e902db78" (UID: "a5a93d05-3c8e-4666-9a55-d8f9e902db78"). InnerVolumeSpecName "kube-api-access-mthwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:24:59.564732 master-0 kubenswrapper[28504]: I0318 13:24:59.564644 28504 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.564732 master-0 kubenswrapper[28504]: I0318 13:24:59.564717 28504 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.564732 master-0 kubenswrapper[28504]: I0318 13:24:59.564745 28504 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a93d05-3c8e-4666-9a55-d8f9e902db78-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.565136 master-0 kubenswrapper[28504]: I0318 13:24:59.564757 28504 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5a93d05-3c8e-4666-9a55-d8f9e902db78-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:59.565136 master-0 kubenswrapper[28504]: I0318 13:24:59.564772 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mthwt\" (UniqueName: \"kubernetes.io/projected/a5a93d05-3c8e-4666-9a55-d8f9e902db78-kube-api-access-mthwt\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:00.289909 master-0 kubenswrapper[28504]: I0318 13:25:00.289852 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" event={"ID":"65cfa12a-0711-4fba-8859-73a3f8f250a9","Type":"ContainerDied","Data":"880004505fafdd74bc0fa1479c8dc9293b280d360df6bd0f451f11d33a5d6e7c"} Mar 18 13:25:00.289909 master-0 kubenswrapper[28504]: I0318 13:25:00.289908 28504 scope.go:117] "RemoveContainer" 
containerID="63d70024e5607dd0f325c1dca25a80e4589b1a262a3d3f4834d611ea24de9a2b" Mar 18 13:25:00.290462 master-0 kubenswrapper[28504]: I0318 13:25:00.289994 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq" Mar 18 13:25:00.295955 master-0 kubenswrapper[28504]: I0318 13:25:00.295486 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" event={"ID":"a5a93d05-3c8e-4666-9a55-d8f9e902db78","Type":"ContainerDied","Data":"2fb5e5e8607f93dafe9cc4e7936985507a00d052cc2ac3e0c096e4455936f109"} Mar 18 13:25:00.295955 master-0 kubenswrapper[28504]: I0318 13:25:00.295628 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b7876dbc-rdzrh" Mar 18 13:25:00.309808 master-0 kubenswrapper[28504]: I0318 13:25:00.309753 28504 scope.go:117] "RemoveContainer" containerID="673862c17b9e84a9d59c686af6c0f638cfa8ae15c58a6c7387f904f4b2566d48" Mar 18 13:25:00.340298 master-0 kubenswrapper[28504]: I0318 13:25:00.340189 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"] Mar 18 13:25:00.348753 master-0 kubenswrapper[28504]: I0318 13:25:00.348692 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-597f7b4fd-fgxxq"] Mar 18 13:25:00.365958 master-0 kubenswrapper[28504]: I0318 13:25:00.365869 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"] Mar 18 13:25:00.367600 master-0 kubenswrapper[28504]: I0318 13:25:00.367522 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66b7876dbc-rdzrh"] Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: I0318 
13:25:00.639617 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-787dc66b4f-w4rqb"] Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: E0318 13:25:00.639977 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: I0318 13:25:00.639991 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: E0318 13:25:00.640009 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: I0318 13:25:00.640015 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: E0318 13:25:00.640059 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" Mar 18 13:25:00.640487 master-0 kubenswrapper[28504]: I0318 13:25:00.640068 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" Mar 18 13:25:00.641415 master-0 kubenswrapper[28504]: I0318 13:25:00.641355 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" Mar 18 13:25:00.641479 master-0 kubenswrapper[28504]: I0318 13:25:00.641421 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" Mar 18 13:25:00.641479 master-0 kubenswrapper[28504]: I0318 13:25:00.641434 28504 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" Mar 18 13:25:00.641479 master-0 kubenswrapper[28504]: I0318 13:25:00.641476 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" containerName="route-controller-manager" Mar 18 13:25:00.642413 master-0 kubenswrapper[28504]: I0318 13:25:00.642340 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.643186 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph"] Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: E0318 13:25:00.643523 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.643537 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" containerName="controller-manager" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.644274 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.644303 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.645066 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.645184 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.645233 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.645235 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-kw85n" Mar 18 13:25:00.646643 master-0 kubenswrapper[28504]: I0318 13:25:00.646528 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:25:00.647227 master-0 kubenswrapper[28504]: I0318 13:25:00.646654 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:25:00.647227 master-0 kubenswrapper[28504]: I0318 13:25:00.646689 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-xzmx4" Mar 18 13:25:00.647385 master-0 kubenswrapper[28504]: I0318 13:25:00.647352 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:25:00.647534 master-0 kubenswrapper[28504]: I0318 
13:25:00.647504 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:25:00.647584 master-0 kubenswrapper[28504]: I0318 13:25:00.647570 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:25:00.647703 master-0 kubenswrapper[28504]: I0318 13:25:00.647683 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:25:00.659905 master-0 kubenswrapper[28504]: I0318 13:25:00.658714 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:25:00.679030 master-0 kubenswrapper[28504]: I0318 13:25:00.678930 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-client-ca\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.679278 master-0 kubenswrapper[28504]: I0318 13:25:00.679260 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c9871d-1acf-4708-a9ec-5c580eee2d07-config\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.679384 master-0 kubenswrapper[28504]: I0318 13:25:00.679369 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c9871d-1acf-4708-a9ec-5c580eee2d07-serving-cert\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: 
\"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.679456 master-0 kubenswrapper[28504]: I0318 13:25:00.679442 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-config\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.679527 master-0 kubenswrapper[28504]: I0318 13:25:00.679515 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c9871d-1acf-4708-a9ec-5c580eee2d07-client-ca\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.679601 master-0 kubenswrapper[28504]: I0318 13:25:00.679588 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs9vd\" (UniqueName: \"kubernetes.io/projected/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-kube-api-access-gs9vd\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.679695 master-0 kubenswrapper[28504]: I0318 13:25:00.679683 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8l9\" (UniqueName: \"kubernetes.io/projected/72c9871d-1acf-4708-a9ec-5c580eee2d07-kube-api-access-7c8l9\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 
13:25:00.679809 master-0 kubenswrapper[28504]: I0318 13:25:00.679794 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-serving-cert\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.679982 master-0 kubenswrapper[28504]: I0318 13:25:00.679928 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-proxy-ca-bundles\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.680124 master-0 kubenswrapper[28504]: I0318 13:25:00.680106 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787dc66b4f-w4rqb"] Mar 18 13:25:00.681833 master-0 kubenswrapper[28504]: I0318 13:25:00.681785 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph"] Mar 18 13:25:00.757739 master-0 kubenswrapper[28504]: I0318 13:25:00.757687 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65cfa12a-0711-4fba-8859-73a3f8f250a9" path="/var/lib/kubelet/pods/65cfa12a-0711-4fba-8859-73a3f8f250a9/volumes" Mar 18 13:25:00.758566 master-0 kubenswrapper[28504]: I0318 13:25:00.758537 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a93d05-3c8e-4666-9a55-d8f9e902db78" path="/var/lib/kubelet/pods/a5a93d05-3c8e-4666-9a55-d8f9e902db78/volumes" Mar 18 13:25:00.780988 master-0 kubenswrapper[28504]: I0318 13:25:00.780893 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-proxy-ca-bundles\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.781199 master-0 kubenswrapper[28504]: I0318 13:25:00.781159 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-client-ca\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.781241 master-0 kubenswrapper[28504]: I0318 13:25:00.781219 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c9871d-1acf-4708-a9ec-5c580eee2d07-config\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.781313 master-0 kubenswrapper[28504]: I0318 13:25:00.781285 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c9871d-1acf-4708-a9ec-5c580eee2d07-serving-cert\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.781368 master-0 kubenswrapper[28504]: I0318 13:25:00.781321 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-config\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " 
pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.781406 master-0 kubenswrapper[28504]: I0318 13:25:00.781376 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c9871d-1acf-4708-a9ec-5c580eee2d07-client-ca\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.781439 master-0 kubenswrapper[28504]: I0318 13:25:00.781424 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs9vd\" (UniqueName: \"kubernetes.io/projected/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-kube-api-access-gs9vd\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.781495 master-0 kubenswrapper[28504]: I0318 13:25:00.781471 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8l9\" (UniqueName: \"kubernetes.io/projected/72c9871d-1acf-4708-a9ec-5c580eee2d07-kube-api-access-7c8l9\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.781614 master-0 kubenswrapper[28504]: I0318 13:25:00.781586 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-serving-cert\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.782719 master-0 kubenswrapper[28504]: I0318 13:25:00.782589 28504 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c9871d-1acf-4708-a9ec-5c580eee2d07-client-ca\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.782878 master-0 kubenswrapper[28504]: I0318 13:25:00.782725 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-config\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.783081 master-0 kubenswrapper[28504]: I0318 13:25:00.783058 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-client-ca\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.784096 master-0 kubenswrapper[28504]: I0318 13:25:00.783475 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-proxy-ca-bundles\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.784096 master-0 kubenswrapper[28504]: I0318 13:25:00.783468 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c9871d-1acf-4708-a9ec-5c580eee2d07-config\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" 
Mar 18 13:25:00.786703 master-0 kubenswrapper[28504]: I0318 13:25:00.786666 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-serving-cert\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.788258 master-0 kubenswrapper[28504]: I0318 13:25:00.788225 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c9871d-1acf-4708-a9ec-5c580eee2d07-serving-cert\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.804816 master-0 kubenswrapper[28504]: I0318 13:25:00.804754 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8l9\" (UniqueName: \"kubernetes.io/projected/72c9871d-1acf-4708-a9ec-5c580eee2d07-kube-api-access-7c8l9\") pod \"route-controller-manager-55c4f6b8f5-shcph\" (UID: \"72c9871d-1acf-4708-a9ec-5c580eee2d07\") " pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:00.805241 master-0 kubenswrapper[28504]: I0318 13:25:00.805159 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs9vd\" (UniqueName: \"kubernetes.io/projected/6f2db637-3ec3-4648-9754-b88ebf6f3c0a-kube-api-access-gs9vd\") pod \"controller-manager-787dc66b4f-w4rqb\" (UID: \"6f2db637-3ec3-4648-9754-b88ebf6f3c0a\") " pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:00.965329 master-0 kubenswrapper[28504]: I0318 13:25:00.965260 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:01.061968 master-0 kubenswrapper[28504]: I0318 13:25:01.058037 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:01.478008 master-0 kubenswrapper[28504]: I0318 13:25:01.476240 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787dc66b4f-w4rqb"] Mar 18 13:25:01.488365 master-0 kubenswrapper[28504]: W0318 13:25:01.488258 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f2db637_3ec3_4648_9754_b88ebf6f3c0a.slice/crio-5a7e25384ac6c2378d42f70a1672d41a20242e379b324338b26127ead55d2dd2 WatchSource:0}: Error finding container 5a7e25384ac6c2378d42f70a1672d41a20242e379b324338b26127ead55d2dd2: Status 404 returned error can't find the container with id 5a7e25384ac6c2378d42f70a1672d41a20242e379b324338b26127ead55d2dd2 Mar 18 13:25:01.743738 master-0 kubenswrapper[28504]: I0318 13:25:01.742810 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph"] Mar 18 13:25:01.748682 master-0 kubenswrapper[28504]: W0318 13:25:01.748636 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72c9871d_1acf_4708_a9ec_5c580eee2d07.slice/crio-129aa9bdb69e190b3d1b087d4ee91f9363f55771d310cc778c7075b35a4d6634 WatchSource:0}: Error finding container 129aa9bdb69e190b3d1b087d4ee91f9363f55771d310cc778c7075b35a4d6634: Status 404 returned error can't find the container with id 129aa9bdb69e190b3d1b087d4ee91f9363f55771d310cc778c7075b35a4d6634 Mar 18 13:25:02.318183 master-0 kubenswrapper[28504]: I0318 13:25:02.318125 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" event={"ID":"72c9871d-1acf-4708-a9ec-5c580eee2d07","Type":"ContainerStarted","Data":"5d3c21d0b18ea05a68b35b7b2f1ad50f2b3ccdf75bd9497b5ae5e3d98635c06a"} Mar 18 13:25:02.318183 master-0 kubenswrapper[28504]: I0318 13:25:02.318183 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" event={"ID":"72c9871d-1acf-4708-a9ec-5c580eee2d07","Type":"ContainerStarted","Data":"129aa9bdb69e190b3d1b087d4ee91f9363f55771d310cc778c7075b35a4d6634"} Mar 18 13:25:02.318631 master-0 kubenswrapper[28504]: I0318 13:25:02.318583 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:02.320794 master-0 kubenswrapper[28504]: I0318 13:25:02.319624 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" event={"ID":"6f2db637-3ec3-4648-9754-b88ebf6f3c0a","Type":"ContainerStarted","Data":"4003e063645d5d8a19f86c61287c2ab0eacd6c23f09831b5c992fc810e8db1d0"} Mar 18 13:25:02.320794 master-0 kubenswrapper[28504]: I0318 13:25:02.319673 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" event={"ID":"6f2db637-3ec3-4648-9754-b88ebf6f3c0a","Type":"ContainerStarted","Data":"5a7e25384ac6c2378d42f70a1672d41a20242e379b324338b26127ead55d2dd2"} Mar 18 13:25:02.320794 master-0 kubenswrapper[28504]: I0318 13:25:02.320114 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 13:25:02.326402 master-0 kubenswrapper[28504]: I0318 13:25:02.326365 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" Mar 18 
13:25:02.327453 master-0 kubenswrapper[28504]: I0318 13:25:02.327327 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" Mar 18 13:25:02.534024 master-0 kubenswrapper[28504]: I0318 13:25:02.533961 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55c4f6b8f5-shcph" podStartSLOduration=4.53392572 podStartE2EDuration="4.53392572s" podCreationTimestamp="2026-03-18 13:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:02.53109029 +0000 UTC m=+80.025896095" watchObservedRunningTime="2026-03-18 13:25:02.53392572 +0000 UTC m=+80.028731485" Mar 18 13:25:02.620128 master-0 kubenswrapper[28504]: I0318 13:25:02.619900 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-787dc66b4f-w4rqb" podStartSLOduration=4.6198696439999996 podStartE2EDuration="4.619869644s" podCreationTimestamp="2026-03-18 13:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:02.610380736 +0000 UTC m=+80.105186531" watchObservedRunningTime="2026-03-18 13:25:02.619869644 +0000 UTC m=+80.114675439" Mar 18 13:25:05.700979 master-0 kubenswrapper[28504]: I0318 13:25:05.700887 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-74d52"] Mar 18 13:25:05.702105 master-0 kubenswrapper[28504]: I0318 13:25:05.702071 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.711134 master-0 kubenswrapper[28504]: I0318 13:25:05.711080 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 13:25:05.711134 master-0 kubenswrapper[28504]: I0318 13:25:05.711107 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-5l4kz" Mar 18 13:25:05.711554 master-0 kubenswrapper[28504]: I0318 13:25:05.711303 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 13:25:05.711554 master-0 kubenswrapper[28504]: I0318 13:25:05.711461 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 13:25:05.711651 master-0 kubenswrapper[28504]: I0318 13:25:05.711629 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 13:25:05.724790 master-0 kubenswrapper[28504]: I0318 13:25:05.724725 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 13:25:05.869133 master-0 kubenswrapper[28504]: I0318 13:25:05.869047 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cknnr\" (UniqueName: \"kubernetes.io/projected/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-kube-api-access-cknnr\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.869365 master-0 kubenswrapper[28504]: I0318 13:25:05.869322 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-serving-cert\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.869529 master-0 kubenswrapper[28504]: I0318 13:25:05.869495 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-config\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.869700 master-0 kubenswrapper[28504]: I0318 13:25:05.869677 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-trusted-ca\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.872652 master-0 kubenswrapper[28504]: I0318 13:25:05.872593 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-74d52"] Mar 18 13:25:05.971076 master-0 kubenswrapper[28504]: I0318 13:25:05.970953 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-config\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.971076 master-0 kubenswrapper[28504]: I0318 13:25:05.971033 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-trusted-ca\") pod 
\"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.971324 master-0 kubenswrapper[28504]: I0318 13:25:05.971090 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cknnr\" (UniqueName: \"kubernetes.io/projected/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-kube-api-access-cknnr\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.971324 master-0 kubenswrapper[28504]: I0318 13:25:05.971121 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-serving-cert\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.972262 master-0 kubenswrapper[28504]: I0318 13:25:05.972226 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-config\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.972923 master-0 kubenswrapper[28504]: I0318 13:25:05.972873 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-trusted-ca\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:05.988507 master-0 kubenswrapper[28504]: I0318 13:25:05.988436 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-serving-cert\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:06.000752 master-0 kubenswrapper[28504]: I0318 13:25:06.000688 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cknnr\" (UniqueName: \"kubernetes.io/projected/8ca88a33-ec5e-415c-b976-cfb6ddfe7da4-kube-api-access-cknnr\") pod \"console-operator-76b6568d85-74d52\" (UID: \"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4\") " pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:06.002605 master-0 kubenswrapper[28504]: I0318 13:25:06.002558 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m9wjm"] Mar 18 13:25:06.003450 master-0 kubenswrapper[28504]: I0318 13:25:06.003419 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.005736 master-0 kubenswrapper[28504]: I0318 13:25:06.005682 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 13:25:06.005736 master-0 kubenswrapper[28504]: I0318 13:25:06.005727 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 13:25:06.005842 master-0 kubenswrapper[28504]: I0318 13:25:06.005738 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-9rpkw" Mar 18 13:25:06.006042 master-0 kubenswrapper[28504]: I0318 13:25:06.006015 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 13:25:06.013216 master-0 kubenswrapper[28504]: I0318 13:25:06.013170 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m9wjm"] Mar 18 13:25:06.031354 master-0 kubenswrapper[28504]: I0318 13:25:06.031298 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:06.073121 master-0 kubenswrapper[28504]: I0318 13:25:06.073046 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwn9w\" (UniqueName: \"kubernetes.io/projected/5cffbdee-d63b-457e-8610-e880c787c9b4-kube-api-access-cwn9w\") pod \"ingress-canary-m9wjm\" (UID: \"5cffbdee-d63b-457e-8610-e880c787c9b4\") " pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.073121 master-0 kubenswrapper[28504]: I0318 13:25:06.073115 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cffbdee-d63b-457e-8610-e880c787c9b4-cert\") pod \"ingress-canary-m9wjm\" (UID: \"5cffbdee-d63b-457e-8610-e880c787c9b4\") " pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.176517 master-0 kubenswrapper[28504]: I0318 13:25:06.173968 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwn9w\" (UniqueName: \"kubernetes.io/projected/5cffbdee-d63b-457e-8610-e880c787c9b4-kube-api-access-cwn9w\") pod \"ingress-canary-m9wjm\" (UID: \"5cffbdee-d63b-457e-8610-e880c787c9b4\") " pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.176517 master-0 kubenswrapper[28504]: I0318 13:25:06.174029 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cffbdee-d63b-457e-8610-e880c787c9b4-cert\") pod \"ingress-canary-m9wjm\" (UID: \"5cffbdee-d63b-457e-8610-e880c787c9b4\") " pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.186185 master-0 kubenswrapper[28504]: I0318 13:25:06.186124 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cffbdee-d63b-457e-8610-e880c787c9b4-cert\") pod \"ingress-canary-m9wjm\" (UID: 
\"5cffbdee-d63b-457e-8610-e880c787c9b4\") " pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.198472 master-0 kubenswrapper[28504]: I0318 13:25:06.198411 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwn9w\" (UniqueName: \"kubernetes.io/projected/5cffbdee-d63b-457e-8610-e880c787c9b4-kube-api-access-cwn9w\") pod \"ingress-canary-m9wjm\" (UID: \"5cffbdee-d63b-457e-8610-e880c787c9b4\") " pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.334170 master-0 kubenswrapper[28504]: I0318 13:25:06.334043 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m9wjm" Mar 18 13:25:06.548177 master-0 kubenswrapper[28504]: I0318 13:25:06.547551 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-74d52"] Mar 18 13:25:06.551700 master-0 kubenswrapper[28504]: W0318 13:25:06.551658 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca88a33_ec5e_415c_b976_cfb6ddfe7da4.slice/crio-815740482a5abadc4f51bb43f895540a776256f210d1768299aa9487bf4c1422 WatchSource:0}: Error finding container 815740482a5abadc4f51bb43f895540a776256f210d1768299aa9487bf4c1422: Status 404 returned error can't find the container with id 815740482a5abadc4f51bb43f895540a776256f210d1768299aa9487bf4c1422 Mar 18 13:25:06.907794 master-0 kubenswrapper[28504]: I0318 13:25:06.907727 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m9wjm"] Mar 18 13:25:07.353558 master-0 kubenswrapper[28504]: I0318 13:25:07.353490 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-74d52" event={"ID":"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4","Type":"ContainerStarted","Data":"815740482a5abadc4f51bb43f895540a776256f210d1768299aa9487bf4c1422"} 
Mar 18 13:25:07.355657 master-0 kubenswrapper[28504]: I0318 13:25:07.355561 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m9wjm" event={"ID":"5cffbdee-d63b-457e-8610-e880c787c9b4","Type":"ContainerStarted","Data":"f80fc0d0a793cbbd759b697edbb45a79873d64c3f1cd9a17b3dd37f0073d8899"} Mar 18 13:25:07.355755 master-0 kubenswrapper[28504]: I0318 13:25:07.355660 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m9wjm" event={"ID":"5cffbdee-d63b-457e-8610-e880c787c9b4","Type":"ContainerStarted","Data":"34b73b33c528604ce37457d9cc7aa87e4f6296b8500c94decdeb2dfad338f23d"} Mar 18 13:25:10.383792 master-0 kubenswrapper[28504]: I0318 13:25:10.383606 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-74d52" event={"ID":"8ca88a33-ec5e-415c-b976-cfb6ddfe7da4","Type":"ContainerStarted","Data":"507fbca4b8411fc95876074378d5e2fae60a3f40e5b22b7926b3bf7aed99a07e"} Mar 18 13:25:10.384551 master-0 kubenswrapper[28504]: I0318 13:25:10.384102 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:10.890011 master-0 kubenswrapper[28504]: I0318 13:25:10.889891 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m9wjm" podStartSLOduration=5.889873924 podStartE2EDuration="5.889873924s" podCreationTimestamp="2026-03-18 13:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:07.37979223 +0000 UTC m=+84.874598025" watchObservedRunningTime="2026-03-18 13:25:10.889873924 +0000 UTC m=+88.384679699" Mar 18 13:25:10.892699 master-0 kubenswrapper[28504]: I0318 13:25:10.892643 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console-operator/console-operator-76b6568d85-74d52" podStartSLOduration=2.442029526 podStartE2EDuration="5.892632192s" podCreationTimestamp="2026-03-18 13:25:05 +0000 UTC" firstStartedPulling="2026-03-18 13:25:06.554258416 +0000 UTC m=+84.049064191" lastFinishedPulling="2026-03-18 13:25:10.004861082 +0000 UTC m=+87.499666857" observedRunningTime="2026-03-18 13:25:10.88759965 +0000 UTC m=+88.382405425" watchObservedRunningTime="2026-03-18 13:25:10.892632192 +0000 UTC m=+88.387437967" Mar 18 13:25:10.898291 master-0 kubenswrapper[28504]: I0318 13:25:10.898235 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-74d52" Mar 18 13:25:11.248347 master-0 kubenswrapper[28504]: I0318 13:25:11.248283 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-crvh7"] Mar 18 13:25:11.249611 master-0 kubenswrapper[28504]: I0318 13:25:11.249577 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-crvh7" Mar 18 13:25:11.252612 master-0 kubenswrapper[28504]: I0318 13:25:11.252577 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-stj86" Mar 18 13:25:11.252986 master-0 kubenswrapper[28504]: I0318 13:25:11.252671 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 13:25:11.253681 master-0 kubenswrapper[28504]: I0318 13:25:11.253643 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 13:25:11.266556 master-0 kubenswrapper[28504]: I0318 13:25:11.266495 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9xq\" (UniqueName: \"kubernetes.io/projected/2cf62b58-2c1c-4187-8fca-1a60b51a1783-kube-api-access-zc9xq\") pod \"downloads-66b8ffb895-crvh7\" (UID: \"2cf62b58-2c1c-4187-8fca-1a60b51a1783\") " pod="openshift-console/downloads-66b8ffb895-crvh7" Mar 18 13:25:11.271395 master-0 kubenswrapper[28504]: I0318 13:25:11.271347 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-crvh7"] Mar 18 13:25:11.313227 master-0 kubenswrapper[28504]: I0318 13:25:11.313164 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:11.313461 master-0 kubenswrapper[28504]: I0318 13:25:11.313368 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="534e47b8-8bec-4dfb-be89-fb018a5edbb0" containerName="installer" containerID="cri-o://dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b" gracePeriod=30 Mar 18 13:25:11.368330 master-0 kubenswrapper[28504]: I0318 13:25:11.368264 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zc9xq\" (UniqueName: \"kubernetes.io/projected/2cf62b58-2c1c-4187-8fca-1a60b51a1783-kube-api-access-zc9xq\") pod \"downloads-66b8ffb895-crvh7\" (UID: \"2cf62b58-2c1c-4187-8fca-1a60b51a1783\") " pod="openshift-console/downloads-66b8ffb895-crvh7" Mar 18 13:25:11.397343 master-0 kubenswrapper[28504]: I0318 13:25:11.397296 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9xq\" (UniqueName: \"kubernetes.io/projected/2cf62b58-2c1c-4187-8fca-1a60b51a1783-kube-api-access-zc9xq\") pod \"downloads-66b8ffb895-crvh7\" (UID: \"2cf62b58-2c1c-4187-8fca-1a60b51a1783\") " pod="openshift-console/downloads-66b8ffb895-crvh7" Mar 18 13:25:11.574886 master-0 kubenswrapper[28504]: I0318 13:25:11.574799 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-crvh7" Mar 18 13:25:12.017964 master-0 kubenswrapper[28504]: I0318 13:25:12.017909 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_534e47b8-8bec-4dfb-be89-fb018a5edbb0/installer/0.log" Mar 18 13:25:12.018273 master-0 kubenswrapper[28504]: I0318 13:25:12.018255 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:12.081226 master-0 kubenswrapper[28504]: I0318 13:25:12.081132 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kubelet-dir\") pod \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " Mar 18 13:25:12.081489 master-0 kubenswrapper[28504]: I0318 13:25:12.081476 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-var-lock\") pod \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " Mar 18 13:25:12.081653 master-0 kubenswrapper[28504]: I0318 13:25:12.081632 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kube-api-access\") pod \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\" (UID: \"534e47b8-8bec-4dfb-be89-fb018a5edbb0\") " Mar 18 13:25:12.082057 master-0 kubenswrapper[28504]: I0318 13:25:12.081446 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "534e47b8-8bec-4dfb-be89-fb018a5edbb0" (UID: "534e47b8-8bec-4dfb-be89-fb018a5edbb0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:12.082057 master-0 kubenswrapper[28504]: I0318 13:25:12.082031 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-var-lock" (OuterVolumeSpecName: "var-lock") pod "534e47b8-8bec-4dfb-be89-fb018a5edbb0" (UID: "534e47b8-8bec-4dfb-be89-fb018a5edbb0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:12.082226 master-0 kubenswrapper[28504]: I0318 13:25:12.082211 28504 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:12.082288 master-0 kubenswrapper[28504]: I0318 13:25:12.082278 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/534e47b8-8bec-4dfb-be89-fb018a5edbb0-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:12.084957 master-0 kubenswrapper[28504]: I0318 13:25:12.084872 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "534e47b8-8bec-4dfb-be89-fb018a5edbb0" (UID: "534e47b8-8bec-4dfb-be89-fb018a5edbb0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:25:12.111761 master-0 kubenswrapper[28504]: I0318 13:25:12.111695 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-crvh7"] Mar 18 13:25:12.117506 master-0 kubenswrapper[28504]: W0318 13:25:12.117454 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf62b58_2c1c_4187_8fca_1a60b51a1783.slice/crio-82e004429c5fa23b90ace69f80a2d0e52ed8a7fc02c8887d0621f86a39b8405c WatchSource:0}: Error finding container 82e004429c5fa23b90ace69f80a2d0e52ed8a7fc02c8887d0621f86a39b8405c: Status 404 returned error can't find the container with id 82e004429c5fa23b90ace69f80a2d0e52ed8a7fc02c8887d0621f86a39b8405c Mar 18 13:25:12.183373 master-0 kubenswrapper[28504]: I0318 13:25:12.183324 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/534e47b8-8bec-4dfb-be89-fb018a5edbb0-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:12.404089 master-0 kubenswrapper[28504]: I0318 13:25:12.403852 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-crvh7" event={"ID":"2cf62b58-2c1c-4187-8fca-1a60b51a1783","Type":"ContainerStarted","Data":"82e004429c5fa23b90ace69f80a2d0e52ed8a7fc02c8887d0621f86a39b8405c"} Mar 18 13:25:12.406512 master-0 kubenswrapper[28504]: I0318 13:25:12.406460 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_534e47b8-8bec-4dfb-be89-fb018a5edbb0/installer/0.log" Mar 18 13:25:12.406632 master-0 kubenswrapper[28504]: I0318 13:25:12.406522 28504 generic.go:334] "Generic (PLEG): container finished" podID="534e47b8-8bec-4dfb-be89-fb018a5edbb0" containerID="dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b" exitCode=1 Mar 18 13:25:12.406632 master-0 kubenswrapper[28504]: I0318 
13:25:12.406593 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:12.406820 master-0 kubenswrapper[28504]: I0318 13:25:12.406594 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"534e47b8-8bec-4dfb-be89-fb018a5edbb0","Type":"ContainerDied","Data":"dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b"} Mar 18 13:25:12.406820 master-0 kubenswrapper[28504]: I0318 13:25:12.406684 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"534e47b8-8bec-4dfb-be89-fb018a5edbb0","Type":"ContainerDied","Data":"2ae5f4c0ab40586c4ee37458baf59eca377b8a70c2d036b5815ace707c5659ee"} Mar 18 13:25:12.406820 master-0 kubenswrapper[28504]: I0318 13:25:12.406719 28504 scope.go:117] "RemoveContainer" containerID="dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b" Mar 18 13:25:12.431282 master-0 kubenswrapper[28504]: I0318 13:25:12.429077 28504 scope.go:117] "RemoveContainer" containerID="dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b" Mar 18 13:25:12.431282 master-0 kubenswrapper[28504]: E0318 13:25:12.429508 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b\": container with ID starting with dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b not found: ID does not exist" containerID="dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b" Mar 18 13:25:12.431282 master-0 kubenswrapper[28504]: I0318 13:25:12.429543 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b"} err="failed to get container status 
\"dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b\": rpc error: code = NotFound desc = could not find container \"dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b\": container with ID starting with dfe679cd1b91dc61ed885bc286d4067a12d5839f5bbdbd907889260931be4b1b not found: ID does not exist" Mar 18 13:25:12.459623 master-0 kubenswrapper[28504]: I0318 13:25:12.459564 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:12.468531 master-0 kubenswrapper[28504]: I0318 13:25:12.468421 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:12.771715 master-0 kubenswrapper[28504]: I0318 13:25:12.770859 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534e47b8-8bec-4dfb-be89-fb018a5edbb0" path="/var/lib/kubelet/pods/534e47b8-8bec-4dfb-be89-fb018a5edbb0/volumes" Mar 18 13:25:16.325903 master-0 kubenswrapper[28504]: I0318 13:25:16.323510 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:25:16.325903 master-0 kubenswrapper[28504]: E0318 13:25:16.323841 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534e47b8-8bec-4dfb-be89-fb018a5edbb0" containerName="installer" Mar 18 13:25:16.325903 master-0 kubenswrapper[28504]: I0318 13:25:16.323863 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="534e47b8-8bec-4dfb-be89-fb018a5edbb0" containerName="installer" Mar 18 13:25:16.325903 master-0 kubenswrapper[28504]: I0318 13:25:16.324445 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="534e47b8-8bec-4dfb-be89-fb018a5edbb0" containerName="installer" Mar 18 13:25:16.325903 master-0 kubenswrapper[28504]: I0318 13:25:16.325041 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.327841 master-0 kubenswrapper[28504]: I0318 13:25:16.327786 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:25:16.332586 master-0 kubenswrapper[28504]: I0318 13:25:16.332530 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-n2lc2" Mar 18 13:25:16.339179 master-0 kubenswrapper[28504]: I0318 13:25:16.339108 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.339305 master-0 kubenswrapper[28504]: I0318 13:25:16.339183 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a495b154-7a49-4e26-b6cf-421686c986ff-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.339305 master-0 kubenswrapper[28504]: I0318 13:25:16.339249 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-var-lock\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.385229 master-0 kubenswrapper[28504]: I0318 13:25:16.381701 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:25:16.446957 master-0 kubenswrapper[28504]: I0318 13:25:16.446796 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-var-lock\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.446957 master-0 kubenswrapper[28504]: I0318 13:25:16.446888 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.446957 master-0 kubenswrapper[28504]: I0318 13:25:16.446917 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a495b154-7a49-4e26-b6cf-421686c986ff-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.450960 master-0 kubenswrapper[28504]: I0318 13:25:16.447486 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-var-lock\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.450960 master-0 kubenswrapper[28504]: I0318 13:25:16.447529 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.500839 master-0 kubenswrapper[28504]: I0318 13:25:16.500782 28504 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a495b154-7a49-4e26-b6cf-421686c986ff-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:16.649866 master-0 kubenswrapper[28504]: I0318 13:25:16.649749 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:25:17.325239 master-0 kubenswrapper[28504]: I0318 13:25:17.325094 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:25:17.331580 master-0 kubenswrapper[28504]: W0318 13:25:17.331485 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda495b154_7a49_4e26_b6cf_421686c986ff.slice/crio-ebed4b008dd25291b94a2a7eb9066f946cf66f82207f5f070c5e9b03e42311bf WatchSource:0}: Error finding container ebed4b008dd25291b94a2a7eb9066f946cf66f82207f5f070c5e9b03e42311bf: Status 404 returned error can't find the container with id ebed4b008dd25291b94a2a7eb9066f946cf66f82207f5f070c5e9b03e42311bf Mar 18 13:25:17.445835 master-0 kubenswrapper[28504]: I0318 13:25:17.445767 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a495b154-7a49-4e26-b6cf-421686c986ff","Type":"ContainerStarted","Data":"ebed4b008dd25291b94a2a7eb9066f946cf66f82207f5f070c5e9b03e42311bf"} Mar 18 13:25:18.461791 master-0 kubenswrapper[28504]: I0318 13:25:18.461723 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a495b154-7a49-4e26-b6cf-421686c986ff","Type":"ContainerStarted","Data":"098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501"} Mar 18 13:25:20.642794 master-0 kubenswrapper[28504]: I0318 13:25:20.642718 28504 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=4.642700458 podStartE2EDuration="4.642700458s" podCreationTimestamp="2026-03-18 13:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:18.509920179 +0000 UTC m=+96.004725964" watchObservedRunningTime="2026-03-18 13:25:20.642700458 +0000 UTC m=+98.137506233" Mar 18 13:25:20.644621 master-0 kubenswrapper[28504]: I0318 13:25:20.644562 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f8d98648f-9x5n4"] Mar 18 13:25:20.645749 master-0 kubenswrapper[28504]: I0318 13:25:20.645670 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.649675 master-0 kubenswrapper[28504]: I0318 13:25:20.649638 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 13:25:20.649859 master-0 kubenswrapper[28504]: I0318 13:25:20.649811 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 13:25:20.650413 master-0 kubenswrapper[28504]: I0318 13:25:20.649972 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 13:25:20.650413 master-0 kubenswrapper[28504]: I0318 13:25:20.650113 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-lktsx" Mar 18 13:25:20.650413 master-0 kubenswrapper[28504]: I0318 13:25:20.650226 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 13:25:20.650413 master-0 kubenswrapper[28504]: I0318 13:25:20.650321 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 13:25:20.877893 master-0 
kubenswrapper[28504]: I0318 13:25:20.874539 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-service-ca\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.877893 master-0 kubenswrapper[28504]: I0318 13:25:20.874611 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvtg\" (UniqueName: \"kubernetes.io/projected/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-kube-api-access-6hvtg\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.877893 master-0 kubenswrapper[28504]: I0318 13:25:20.874690 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-oauth-serving-cert\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.877893 master-0 kubenswrapper[28504]: I0318 13:25:20.874726 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-oauth-config\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.877893 master-0 kubenswrapper[28504]: I0318 13:25:20.874753 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-serving-cert\") pod 
\"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.877893 master-0 kubenswrapper[28504]: I0318 13:25:20.874768 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-config\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.884182 master-0 kubenswrapper[28504]: I0318 13:25:20.884110 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f8d98648f-9x5n4"] Mar 18 13:25:20.976219 master-0 kubenswrapper[28504]: I0318 13:25:20.976154 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-service-ca\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.977186 master-0 kubenswrapper[28504]: I0318 13:25:20.976285 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvtg\" (UniqueName: \"kubernetes.io/projected/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-kube-api-access-6hvtg\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.977407 master-0 kubenswrapper[28504]: I0318 13:25:20.977283 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-service-ca\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.978055 master-0 kubenswrapper[28504]: I0318 13:25:20.977975 
28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-oauth-serving-cert\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.979585 master-0 kubenswrapper[28504]: I0318 13:25:20.979549 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-oauth-serving-cert\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.980346 master-0 kubenswrapper[28504]: I0318 13:25:20.980259 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-oauth-config\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.980498 master-0 kubenswrapper[28504]: I0318 13:25:20.980455 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-serving-cert\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.980596 master-0 kubenswrapper[28504]: I0318 13:25:20.980535 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-config\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:25:20.981581 master-0 
kubenswrapper[28504]: I0318 13:25:20.981512 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-config\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:20.981920 master-0 kubenswrapper[28504]: I0318 13:25:20.981888 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-oauth-config\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:20.983954 master-0 kubenswrapper[28504]: I0318 13:25:20.983901 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-serving-cert\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:21.047789 master-0 kubenswrapper[28504]: I0318 13:25:21.047713 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hvtg\" (UniqueName: \"kubernetes.io/projected/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-kube-api-access-6hvtg\") pod \"console-f8d98648f-9x5n4\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") " pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:21.277077 master-0 kubenswrapper[28504]: I0318 13:25:21.276920 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:22.081321 master-0 kubenswrapper[28504]: I0318 13:25:22.081265 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f8d98648f-9x5n4"]
Mar 18 13:25:22.085232 master-0 kubenswrapper[28504]: W0318 13:25:22.085182 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b35ff9a_60ec_41d3_977e_c2fca74c16e0.slice/crio-3c7ab017383fc797c23b30b1bb737974173ea769320a830f56cccb6ac068f9f7 WatchSource:0}: Error finding container 3c7ab017383fc797c23b30b1bb737974173ea769320a830f56cccb6ac068f9f7: Status 404 returned error can't find the container with id 3c7ab017383fc797c23b30b1bb737974173ea769320a830f56cccb6ac068f9f7
Mar 18 13:25:22.526322 master-0 kubenswrapper[28504]: I0318 13:25:22.526222 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f8d98648f-9x5n4" event={"ID":"5b35ff9a-60ec-41d3-977e-c2fca74c16e0","Type":"ContainerStarted","Data":"3c7ab017383fc797c23b30b1bb737974173ea769320a830f56cccb6ac068f9f7"}
Mar 18 13:25:23.141020 master-0 kubenswrapper[28504]: I0318 13:25:23.139545 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86cfd4f585-tfs7z"]
Mar 18 13:25:23.141540 master-0 kubenswrapper[28504]: I0318 13:25:23.141399 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.158753 master-0 kubenswrapper[28504]: I0318 13:25:23.157333 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 18 13:25:23.165665 master-0 kubenswrapper[28504]: I0318 13:25:23.165598 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86cfd4f585-tfs7z"]
Mar 18 13:25:23.292283 master-0 kubenswrapper[28504]: I0318 13:25:23.292219 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" podUID="096901d4-7e1a-4de8-b6b2-0acf03e98472" containerName="oauth-openshift" containerID="cri-o://9b2b3e896a922ee97e5f82a9f1c9bdfc013e4103eae660d131e8d231e759e084" gracePeriod=15
Mar 18 13:25:23.315830 master-0 kubenswrapper[28504]: I0318 13:25:23.315755 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-service-ca\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.316111 master-0 kubenswrapper[28504]: I0318 13:25:23.315885 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-config\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.316111 master-0 kubenswrapper[28504]: I0318 13:25:23.315926 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-serving-cert\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.316111 master-0 kubenswrapper[28504]: I0318 13:25:23.315985 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-trusted-ca-bundle\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.316111 master-0 kubenswrapper[28504]: I0318 13:25:23.316012 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-oauth-config\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.316111 master-0 kubenswrapper[28504]: I0318 13:25:23.316046 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgnpw\" (UniqueName: \"kubernetes.io/projected/fe6cd387-db28-4db0-b933-ba58fcaf8f24-kube-api-access-hgnpw\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.316111 master-0 kubenswrapper[28504]: I0318 13:25:23.316090 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-oauth-serving-cert\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417620 master-0 kubenswrapper[28504]: I0318 13:25:23.417503 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-serving-cert\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417620 master-0 kubenswrapper[28504]: I0318 13:25:23.417578 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-trusted-ca-bundle\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417620 master-0 kubenswrapper[28504]: I0318 13:25:23.417606 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-oauth-config\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417846 master-0 kubenswrapper[28504]: I0318 13:25:23.417645 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgnpw\" (UniqueName: \"kubernetes.io/projected/fe6cd387-db28-4db0-b933-ba58fcaf8f24-kube-api-access-hgnpw\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417846 master-0 kubenswrapper[28504]: I0318 13:25:23.417688 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-oauth-serving-cert\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417846 master-0 kubenswrapper[28504]: I0318 13:25:23.417740 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-service-ca\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.417846 master-0 kubenswrapper[28504]: I0318 13:25:23.417774 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-config\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.418787 master-0 kubenswrapper[28504]: I0318 13:25:23.418610 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-config\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.420442 master-0 kubenswrapper[28504]: I0318 13:25:23.419358 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-oauth-serving-cert\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.421510 master-0 kubenswrapper[28504]: I0318 13:25:23.421470 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-trusted-ca-bundle\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.421775 master-0 kubenswrapper[28504]: I0318 13:25:23.421740 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-service-ca\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.437171 master-0 kubenswrapper[28504]: I0318 13:25:23.433889 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-oauth-config\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.437171 master-0 kubenswrapper[28504]: I0318 13:25:23.434158 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-serving-cert\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.439644 master-0 kubenswrapper[28504]: I0318 13:25:23.439571 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnpw\" (UniqueName: \"kubernetes.io/projected/fe6cd387-db28-4db0-b933-ba58fcaf8f24-kube-api-access-hgnpw\") pod \"console-86cfd4f585-tfs7z\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") " pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.471687 master-0 kubenswrapper[28504]: I0318 13:25:23.471509 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:25:23.556283 master-0 kubenswrapper[28504]: I0318 13:25:23.553616 28504 generic.go:334] "Generic (PLEG): container finished" podID="096901d4-7e1a-4de8-b6b2-0acf03e98472" containerID="9b2b3e896a922ee97e5f82a9f1c9bdfc013e4103eae660d131e8d231e759e084" exitCode=0
Mar 18 13:25:23.556283 master-0 kubenswrapper[28504]: I0318 13:25:23.553700 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" event={"ID":"096901d4-7e1a-4de8-b6b2-0acf03e98472","Type":"ContainerDied","Data":"9b2b3e896a922ee97e5f82a9f1c9bdfc013e4103eae660d131e8d231e759e084"}
Mar 18 13:25:24.296311 master-0 kubenswrapper[28504]: I0318 13:25:24.296273 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj"
Mar 18 13:25:24.345430 master-0 kubenswrapper[28504]: I0318 13:25:24.345370 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"]
Mar 18 13:25:24.345658 master-0 kubenswrapper[28504]: E0318 13:25:24.345630 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096901d4-7e1a-4de8-b6b2-0acf03e98472" containerName="oauth-openshift"
Mar 18 13:25:24.345658 master-0 kubenswrapper[28504]: I0318 13:25:24.345647 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="096901d4-7e1a-4de8-b6b2-0acf03e98472" containerName="oauth-openshift"
Mar 18 13:25:24.345814 master-0 kubenswrapper[28504]: I0318 13:25:24.345771 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="096901d4-7e1a-4de8-b6b2-0acf03e98472" containerName="oauth-openshift"
Mar 18 13:25:24.346270 master-0 kubenswrapper[28504]: I0318 13:25:24.346232 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.374501 master-0 kubenswrapper[28504]: I0318 13:25:24.374225 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"]
Mar 18 13:25:24.398103 master-0 kubenswrapper[28504]: I0318 13:25:24.397539 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86cfd4f585-tfs7z"]
Mar 18 13:25:24.423087 master-0 kubenswrapper[28504]: W0318 13:25:24.417226 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe6cd387_db28_4db0_b933_ba58fcaf8f24.slice/crio-9a30dd194d8f1bf9917bc22908b4ef1f9d46e1509a2f94cd423b3dfc7087a162 WatchSource:0}: Error finding container 9a30dd194d8f1bf9917bc22908b4ef1f9d46e1509a2f94cd423b3dfc7087a162: Status 404 returned error can't find the container with id 9a30dd194d8f1bf9917bc22908b4ef1f9d46e1509a2f94cd423b3dfc7087a162
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437135 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-login\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437255 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-service-ca\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437322 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-trusted-ca-bundle\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437395 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-ocp-branding-template\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437427 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-policies\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437539 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-dir\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.437584 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-error\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.438843 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.438893 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.438992 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439126 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-cliconfig\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439227 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4z4s\" (UniqueName: \"kubernetes.io/projected/096901d4-7e1a-4de8-b6b2-0acf03e98472-kube-api-access-c4z4s\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439211 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439288 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-provider-selection\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439379 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-serving-cert\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439402 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-router-certs\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439440 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-session\") pod \"096901d4-7e1a-4de8-b6b2-0acf03e98472\" (UID: \"096901d4-7e1a-4de8-b6b2-0acf03e98472\") "
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.439788 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.441148 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.441172 28504 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.441186 28504 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/096901d4-7e1a-4de8-b6b2-0acf03e98472-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.441196 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.442106 master-0 kubenswrapper[28504]: I0318 13:25:24.441206 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.444561 master-0 kubenswrapper[28504]: I0318 13:25:24.442871 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.444561 master-0 kubenswrapper[28504]: I0318 13:25:24.444507 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.447754 master-0 kubenswrapper[28504]: I0318 13:25:24.447648 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.448158 master-0 kubenswrapper[28504]: I0318 13:25:24.448101 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.448421 master-0 kubenswrapper[28504]: I0318 13:25:24.448377 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.448515 master-0 kubenswrapper[28504]: I0318 13:25:24.448468 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.449090 master-0 kubenswrapper[28504]: I0318 13:25:24.448957 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096901d4-7e1a-4de8-b6b2-0acf03e98472-kube-api-access-c4z4s" (OuterVolumeSpecName: "kube-api-access-c4z4s") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "kube-api-access-c4z4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:25:24.449525 master-0 kubenswrapper[28504]: I0318 13:25:24.449473 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "096901d4-7e1a-4de8-b6b2-0acf03e98472" (UID: "096901d4-7e1a-4de8-b6b2-0acf03e98472"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:25:24.543046 master-0 kubenswrapper[28504]: I0318 13:25:24.542823 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-session\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543046 master-0 kubenswrapper[28504]: I0318 13:25:24.542907 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543046 master-0 kubenswrapper[28504]: I0318 13:25:24.542993 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543046 master-0 kubenswrapper[28504]: I0318 13:25:24.543043 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-login\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543105 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543127 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-audit-policies\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543148 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpwf\" (UniqueName: \"kubernetes.io/projected/436fad70-517b-4375-ac49-77829a6969de-kube-api-access-qzpwf\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543175 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543206 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543229 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543249 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-error\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543284 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/436fad70-517b-4375-ac49-77829a6969de-audit-dir\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543315 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543375 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543392 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543404 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4z4s\" (UniqueName: \"kubernetes.io/projected/096901d4-7e1a-4de8-b6b2-0acf03e98472-kube-api-access-c4z4s\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.543421 master-0 kubenswrapper[28504]: I0318 13:25:24.543420 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.544087 master-0 kubenswrapper[28504]: I0318 13:25:24.543444 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.544087 master-0 kubenswrapper[28504]: I0318 13:25:24.543459 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.544087 master-0 kubenswrapper[28504]: I0318 13:25:24.543487 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.544087 master-0 kubenswrapper[28504]: I0318 13:25:24.543499 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/096901d4-7e1a-4de8-b6b2-0acf03e98472-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 18 13:25:24.573652 master-0 kubenswrapper[28504]: I0318 13:25:24.573598 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj" event={"ID":"096901d4-7e1a-4de8-b6b2-0acf03e98472","Type":"ContainerDied","Data":"4e2a264fd1049a285d9ab7a08ee511cf8934351ad13ece3fcd6dcfa4bb0512f7"}
Mar 18 13:25:24.573652 master-0 kubenswrapper[28504]: I0318 13:25:24.573657 28504 scope.go:117] "RemoveContainer" containerID="9b2b3e896a922ee97e5f82a9f1c9bdfc013e4103eae660d131e8d231e759e084"
Mar 18 13:25:24.573920 master-0 kubenswrapper[28504]: I0318 13:25:24.573762 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cbf579478-nnclj"
Mar 18 13:25:24.575392 master-0 kubenswrapper[28504]: I0318 13:25:24.575363 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86cfd4f585-tfs7z" event={"ID":"fe6cd387-db28-4db0-b933-ba58fcaf8f24","Type":"ContainerStarted","Data":"9a30dd194d8f1bf9917bc22908b4ef1f9d46e1509a2f94cd423b3dfc7087a162"}
Mar 18 13:25:24.614569 master-0 kubenswrapper[28504]: I0318 13:25:24.614479 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7cbf579478-nnclj"]
Mar 18 13:25:24.623974 master-0 kubenswrapper[28504]: I0318 13:25:24.623876 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7cbf579478-nnclj"]
Mar 18 13:25:24.645572 master-0 kubenswrapper[28504]: I0318 13:25:24.645483 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-audit-policies\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.645780 master-0 kubenswrapper[28504]: I0318 13:25:24.645579 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"
Mar 18 13:25:24.645780 master-0 kubenswrapper[28504]: I0318 13:25:24.645627 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzpwf\" (UniqueName: \"kubernetes.io/projected/436fad70-517b-4375-ac49-77829a6969de-kube-api-access-qzpwf\") pod
\"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.645780 master-0 kubenswrapper[28504]: I0318 13:25:24.645669 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.646005 master-0 kubenswrapper[28504]: I0318 13:25:24.645915 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.646078 master-0 kubenswrapper[28504]: I0318 13:25:24.646044 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.646115 master-0 kubenswrapper[28504]: I0318 13:25:24.646090 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-error\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " 
pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.646787 master-0 kubenswrapper[28504]: I0318 13:25:24.646173 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/436fad70-517b-4375-ac49-77829a6969de-audit-dir\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.646787 master-0 kubenswrapper[28504]: I0318 13:25:24.646269 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.646884 master-0 kubenswrapper[28504]: I0318 13:25:24.646847 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/436fad70-517b-4375-ac49-77829a6969de-audit-dir\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.647378 master-0 kubenswrapper[28504]: I0318 13:25:24.647036 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-audit-policies\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.647378 master-0 kubenswrapper[28504]: I0318 13:25:24.647178 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-session\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.648179 master-0 kubenswrapper[28504]: I0318 13:25:24.647690 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.648179 master-0 kubenswrapper[28504]: I0318 13:25:24.647709 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.648179 master-0 kubenswrapper[28504]: I0318 13:25:24.647725 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.648179 master-0 kubenswrapper[28504]: I0318 13:25:24.647872 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-login\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " 
pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.648488 master-0 kubenswrapper[28504]: I0318 13:25:24.648453 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.650462 master-0 kubenswrapper[28504]: I0318 13:25:24.650421 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.650795 master-0 kubenswrapper[28504]: I0318 13:25:24.650733 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.657057 master-0 kubenswrapper[28504]: I0318 13:25:24.656516 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-login\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.657057 master-0 kubenswrapper[28504]: I0318 13:25:24.656631 28504 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-error\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.657057 master-0 kubenswrapper[28504]: I0318 13:25:24.656810 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.657057 master-0 kubenswrapper[28504]: I0318 13:25:24.657012 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.657057 master-0 kubenswrapper[28504]: I0318 13:25:24.656911 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-session\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.657057 master-0 kubenswrapper[28504]: I0318 13:25:24.657054 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.665172 master-0 kubenswrapper[28504]: I0318 13:25:24.665120 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzpwf\" (UniqueName: \"kubernetes.io/projected/436fad70-517b-4375-ac49-77829a6969de-kube-api-access-qzpwf\") pod \"oauth-openshift-6c4f65fbf4-78m99\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.687209 master-0 kubenswrapper[28504]: I0318 13:25:24.686991 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:24.761488 master-0 kubenswrapper[28504]: I0318 13:25:24.760560 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096901d4-7e1a-4de8-b6b2-0acf03e98472" path="/var/lib/kubelet/pods/096901d4-7e1a-4de8-b6b2-0acf03e98472/volumes" Mar 18 13:25:25.111020 master-0 kubenswrapper[28504]: I0318 13:25:25.110868 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"] Mar 18 13:25:25.121536 master-0 kubenswrapper[28504]: W0318 13:25:25.120910 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod436fad70_517b_4375_ac49_77829a6969de.slice/crio-02aaf6f550f51aaa40a24cc5df57348ee35c9dfa40627854293c9e1f22ccf267 WatchSource:0}: Error finding container 02aaf6f550f51aaa40a24cc5df57348ee35c9dfa40627854293c9e1f22ccf267: Status 404 returned error can't find the container with id 02aaf6f550f51aaa40a24cc5df57348ee35c9dfa40627854293c9e1f22ccf267 Mar 18 13:25:25.585526 master-0 kubenswrapper[28504]: I0318 13:25:25.585463 28504 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" event={"ID":"436fad70-517b-4375-ac49-77829a6969de","Type":"ContainerStarted","Data":"02aaf6f550f51aaa40a24cc5df57348ee35c9dfa40627854293c9e1f22ccf267"} Mar 18 13:25:26.665292 master-0 kubenswrapper[28504]: I0318 13:25:26.664809 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" event={"ID":"436fad70-517b-4375-ac49-77829a6969de","Type":"ContainerStarted","Data":"f0a2722d1309cf82ebe9b74c6a5664dd8abcb6d18e7e2ec265add83a45d29b08"} Mar 18 13:25:27.680721 master-0 kubenswrapper[28504]: I0318 13:25:27.677844 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:27.684014 master-0 kubenswrapper[28504]: I0318 13:25:27.683975 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:25:27.721632 master-0 kubenswrapper[28504]: I0318 13:25:27.721497 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" podStartSLOduration=32.721469436 podStartE2EDuration="32.721469436s" podCreationTimestamp="2026-03-18 13:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:27.712653497 +0000 UTC m=+105.207459272" watchObservedRunningTime="2026-03-18 13:25:27.721469436 +0000 UTC m=+105.216275211" Mar 18 13:25:29.188046 master-0 kubenswrapper[28504]: I0318 13:25:29.186117 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-pdh89"] Mar 18 13:25:29.188046 master-0 kubenswrapper[28504]: I0318 13:25:29.187250 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.190496 master-0 kubenswrapper[28504]: I0318 13:25:29.190195 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 13:25:29.190496 master-0 kubenswrapper[28504]: I0318 13:25:29.190375 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 13:25:29.210891 master-0 kubenswrapper[28504]: I0318 13:25:29.209840 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-pdh89"] Mar 18 13:25:29.212807 master-0 kubenswrapper[28504]: I0318 13:25:29.212260 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5456569b-112b-449d-a774-24ca1a5e91ec-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: \"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.212807 master-0 kubenswrapper[28504]: I0318 13:25:29.212473 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5456569b-112b-449d-a774-24ca1a5e91ec-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: \"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.318086 master-0 kubenswrapper[28504]: I0318 13:25:29.313586 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5456569b-112b-449d-a774-24ca1a5e91ec-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: 
\"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.318086 master-0 kubenswrapper[28504]: I0318 13:25:29.313696 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5456569b-112b-449d-a774-24ca1a5e91ec-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: \"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.318086 master-0 kubenswrapper[28504]: E0318 13:25:29.313882 28504 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 13:25:29.318086 master-0 kubenswrapper[28504]: E0318 13:25:29.313965 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5456569b-112b-449d-a774-24ca1a5e91ec-networking-console-plugin-cert podName:5456569b-112b-449d-a774-24ca1a5e91ec nodeName:}" failed. No retries permitted until 2026-03-18 13:25:29.813924203 +0000 UTC m=+107.308729988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5456569b-112b-449d-a774-24ca1a5e91ec-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-pdh89" (UID: "5456569b-112b-449d-a774-24ca1a5e91ec") : secret "networking-console-plugin-cert" not found Mar 18 13:25:29.318086 master-0 kubenswrapper[28504]: I0318 13:25:29.315445 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5456569b-112b-449d-a774-24ca1a5e91ec-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: \"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.873735 master-0 kubenswrapper[28504]: I0318 13:25:29.873643 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5456569b-112b-449d-a774-24ca1a5e91ec-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: \"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:29.887052 master-0 kubenswrapper[28504]: I0318 13:25:29.886883 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5456569b-112b-449d-a774-24ca1a5e91ec-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-pdh89\" (UID: \"5456569b-112b-449d-a774-24ca1a5e91ec\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:30.139970 master-0 kubenswrapper[28504]: I0318 13:25:30.139705 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" Mar 18 13:25:30.972181 master-0 kubenswrapper[28504]: I0318 13:25:30.972104 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-pdh89"] Mar 18 13:25:30.981341 master-0 kubenswrapper[28504]: W0318 13:25:30.981255 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5456569b_112b_449d_a774_24ca1a5e91ec.slice/crio-19b100534576dad157575d3bef7d6d561c90df9faca0da29eef2346b44e42324 WatchSource:0}: Error finding container 19b100534576dad157575d3bef7d6d561c90df9faca0da29eef2346b44e42324: Status 404 returned error can't find the container with id 19b100534576dad157575d3bef7d6d561c90df9faca0da29eef2346b44e42324 Mar 18 13:25:31.737220 master-0 kubenswrapper[28504]: I0318 13:25:31.737116 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" event={"ID":"5456569b-112b-449d-a774-24ca1a5e91ec","Type":"ContainerStarted","Data":"19b100534576dad157575d3bef7d6d561c90df9faca0da29eef2346b44e42324"} Mar 18 13:25:31.742006 master-0 kubenswrapper[28504]: I0318 13:25:31.741658 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86cfd4f585-tfs7z" event={"ID":"fe6cd387-db28-4db0-b933-ba58fcaf8f24","Type":"ContainerStarted","Data":"bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300"} Mar 18 13:25:31.746156 master-0 kubenswrapper[28504]: I0318 13:25:31.745836 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f8d98648f-9x5n4" event={"ID":"5b35ff9a-60ec-41d3-977e-c2fca74c16e0","Type":"ContainerStarted","Data":"93659e5ef1fa2e68bb3c2b0208fcfc9b4b9bcad5d5a4c0d5bc016ba657186120"} Mar 18 13:25:31.766967 master-0 kubenswrapper[28504]: I0318 13:25:31.766407 28504 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86cfd4f585-tfs7z" podStartSLOduration=2.553502888 podStartE2EDuration="8.766386825s" podCreationTimestamp="2026-03-18 13:25:23 +0000 UTC" firstStartedPulling="2026-03-18 13:25:24.420274454 +0000 UTC m=+101.915080229" lastFinishedPulling="2026-03-18 13:25:30.633158391 +0000 UTC m=+108.127964166" observedRunningTime="2026-03-18 13:25:31.765769998 +0000 UTC m=+109.260575783" watchObservedRunningTime="2026-03-18 13:25:31.766386825 +0000 UTC m=+109.261192600" Mar 18 13:25:32.022117 master-0 kubenswrapper[28504]: I0318 13:25:32.021904 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f8d98648f-9x5n4" podStartSLOduration=3.5096737129999998 podStartE2EDuration="12.021881502s" podCreationTimestamp="2026-03-18 13:25:20 +0000 UTC" firstStartedPulling="2026-03-18 13:25:22.090827603 +0000 UTC m=+99.585633378" lastFinishedPulling="2026-03-18 13:25:30.603035392 +0000 UTC m=+108.097841167" observedRunningTime="2026-03-18 13:25:32.001097505 +0000 UTC m=+109.495903290" watchObservedRunningTime="2026-03-18 13:25:32.021881502 +0000 UTC m=+109.516687277" Mar 18 13:25:32.024716 master-0 kubenswrapper[28504]: I0318 13:25:32.024684 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:25:32.024921 master-0 kubenswrapper[28504]: I0318 13:25:32.024886 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="a495b154-7a49-4e26-b6cf-421686c986ff" containerName="installer" containerID="cri-o://098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501" gracePeriod=30 Mar 18 13:25:33.472996 master-0 kubenswrapper[28504]: I0318 13:25:33.472596 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86cfd4f585-tfs7z" Mar 18 13:25:33.472996 master-0 
kubenswrapper[28504]: I0318 13:25:33.472669 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86cfd4f585-tfs7z" Mar 18 13:25:33.474996 master-0 kubenswrapper[28504]: I0318 13:25:33.474970 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:25:33.475073 master-0 kubenswrapper[28504]: I0318 13:25:33.475013 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:25:33.775244 master-0 kubenswrapper[28504]: I0318 13:25:33.774415 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" event={"ID":"5456569b-112b-449d-a774-24ca1a5e91ec","Type":"ContainerStarted","Data":"2e8881215f41fd818d4b8458f8068e0a5108ac09745f80e097a0aedf49136b8b"} Mar 18 13:25:33.806230 master-0 kubenswrapper[28504]: I0318 13:25:33.806117 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-pdh89" podStartSLOduration=3.151061905 podStartE2EDuration="4.806091806s" podCreationTimestamp="2026-03-18 13:25:29 +0000 UTC" firstStartedPulling="2026-03-18 13:25:30.985400687 +0000 UTC m=+108.480206472" lastFinishedPulling="2026-03-18 13:25:32.640430598 +0000 UTC m=+110.135236373" observedRunningTime="2026-03-18 13:25:33.797529494 +0000 UTC m=+111.292335279" watchObservedRunningTime="2026-03-18 13:25:33.806091806 +0000 UTC m=+111.300897581" Mar 18 13:25:35.137156 master-0 kubenswrapper[28504]: I0318 13:25:35.136715 28504 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 13:25:35.138357 master-0 kubenswrapper[28504]: I0318 13:25:35.137737 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.153006 master-0 kubenswrapper[28504]: I0318 13:25:35.152928 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 13:25:35.226976 master-0 kubenswrapper[28504]: I0318 13:25:35.226734 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.226976 master-0 kubenswrapper[28504]: I0318 13:25:35.226882 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-var-lock\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.226976 master-0 kubenswrapper[28504]: I0318 13:25:35.226902 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.328475 master-0 kubenswrapper[28504]: I0318 13:25:35.328179 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-var-lock\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.328475 master-0 kubenswrapper[28504]: I0318 13:25:35.328312 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-var-lock\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.328475 master-0 kubenswrapper[28504]: I0318 13:25:35.328406 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.328986 master-0 kubenswrapper[28504]: I0318 13:25:35.328600 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.328986 master-0 kubenswrapper[28504]: I0318 13:25:35.328700 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.346456 master-0 kubenswrapper[28504]: I0318 13:25:35.346361 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:35.477523 master-0 kubenswrapper[28504]: I0318 13:25:35.477484 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 13:25:36.308425 master-0 kubenswrapper[28504]: I0318 13:25:36.307300 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 13:25:36.799518 master-0 kubenswrapper[28504]: I0318 13:25:36.799435 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef","Type":"ContainerStarted","Data":"f45b7ebcd1fce97eb746518b9f7af0c2a6691e04403e4c43500af8ea88b9aca6"}
Mar 18 13:25:36.799518 master-0 kubenswrapper[28504]: I0318 13:25:36.799507 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef","Type":"ContainerStarted","Data":"8ecaeaf2f4afdd70a7f61b8faadfc341341560cf04634b52a0d0c83003bfb235"}
Mar 18 13:25:41.277695 master-0 kubenswrapper[28504]: I0318 13:25:41.277634 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:41.277695 master-0 kubenswrapper[28504]: I0318 13:25:41.277692 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:25:41.282262 master-0 kubenswrapper[28504]: I0318 13:25:41.282177 28504 patch_prober.go:28] interesting pod/console-f8d98648f-9x5n4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 13:25:41.282262 master-0 kubenswrapper[28504]: I0318 13:25:41.282235 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f8d98648f-9x5n4" podUID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 13:25:43.468827 master-0 kubenswrapper[28504]: I0318 13:25:43.468728 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=8.468703435 podStartE2EDuration="8.468703435s" podCreationTimestamp="2026-03-18 13:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:36.824029259 +0000 UTC m=+114.318835054" watchObservedRunningTime="2026-03-18 13:25:43.468703435 +0000 UTC m=+120.963509230"
Mar 18 13:25:43.472237 master-0 kubenswrapper[28504]: I0318 13:25:43.471766 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f8d98648f-9x5n4"]
Mar 18 13:25:43.472321 master-0 kubenswrapper[28504]: I0318 13:25:43.472269 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 18 13:25:43.472372 master-0 kubenswrapper[28504]: I0318 13:25:43.472321 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 18 13:25:44.525028 master-0 kubenswrapper[28504]: I0318 13:25:44.524246 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7486c568bf-jngmz"]
Mar 18 13:25:44.530994 master-0 kubenswrapper[28504]: I0318 13:25:44.530817 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.565352 master-0 kubenswrapper[28504]: I0318 13:25:44.565295 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7486c568bf-jngmz"]
Mar 18 13:25:44.668828 master-0 kubenswrapper[28504]: I0318 13:25:44.668761 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-service-ca\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.668828 master-0 kubenswrapper[28504]: I0318 13:25:44.668838 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-oauth-serving-cert\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.669143 master-0 kubenswrapper[28504]: I0318 13:25:44.668900 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-trusted-ca-bundle\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.669143 master-0 kubenswrapper[28504]: I0318 13:25:44.668961 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-serving-cert\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.669143 master-0 kubenswrapper[28504]: I0318 13:25:44.668988 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjw6d\" (UniqueName: \"kubernetes.io/projected/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-kube-api-access-cjw6d\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.669143 master-0 kubenswrapper[28504]: I0318 13:25:44.669006 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-oauth-config\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.669143 master-0 kubenswrapper[28504]: I0318 13:25:44.669039 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-config\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771130 master-0 kubenswrapper[28504]: I0318 13:25:44.770617 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-serving-cert\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771130 master-0 kubenswrapper[28504]: I0318 13:25:44.770679 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjw6d\" (UniqueName: \"kubernetes.io/projected/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-kube-api-access-cjw6d\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771130 master-0 kubenswrapper[28504]: I0318 13:25:44.770709 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-oauth-config\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771130 master-0 kubenswrapper[28504]: I0318 13:25:44.770758 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-config\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771130 master-0 kubenswrapper[28504]: I0318 13:25:44.770814 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-service-ca\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771130 master-0 kubenswrapper[28504]: I0318 13:25:44.771013 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-oauth-serving-cert\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.771499 master-0 kubenswrapper[28504]: I0318 13:25:44.771161 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-trusted-ca-bundle\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.772482 master-0 kubenswrapper[28504]: I0318 13:25:44.772441 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-config\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.772562 master-0 kubenswrapper[28504]: I0318 13:25:44.772449 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-oauth-serving-cert\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.772724 master-0 kubenswrapper[28504]: I0318 13:25:44.772684 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-trusted-ca-bundle\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.776484 master-0 kubenswrapper[28504]: I0318 13:25:44.776397 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-oauth-config\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.778786 master-0 kubenswrapper[28504]: I0318 13:25:44.778734 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-serving-cert\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.779517 master-0 kubenswrapper[28504]: I0318 13:25:44.779445 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-service-ca\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.789390 master-0 kubenswrapper[28504]: I0318 13:25:44.789302 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjw6d\" (UniqueName: \"kubernetes.io/projected/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-kube-api-access-cjw6d\") pod \"console-7486c568bf-jngmz\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") " pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:44.872685 master-0 kubenswrapper[28504]: I0318 13:25:44.872588 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:25:50.534361 master-0 kubenswrapper[28504]: E0318 13:25:50.534248 28504 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-poda495b154_7a49_4e26_b6cf_421686c986ff.slice/crio-conmon-098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-poda495b154_7a49_4e26_b6cf_421686c986ff.slice/crio-098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 13:25:52.456278 master-0 kubenswrapper[28504]: I0318 13:25:52.456145 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"]
Mar 18 13:25:53.650259 master-0 kubenswrapper[28504]: I0318 13:25:53.649317 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 18 13:25:53.650259 master-0 kubenswrapper[28504]: I0318 13:25:53.649404 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 18 13:26:03.473686 master-0 kubenswrapper[28504]: I0318 13:26:03.473460 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 18 13:26:03.473686 master-0 kubenswrapper[28504]: I0318 13:26:03.473625 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 18 13:26:06.052203 master-0 kubenswrapper[28504]: I0318 13:26:06.050847 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_a495b154-7a49-4e26-b6cf-421686c986ff/installer/0.log"
Mar 18 13:26:06.052203 master-0 kubenswrapper[28504]: I0318 13:26:06.050923 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 13:26:06.172933 master-0 kubenswrapper[28504]: I0318 13:26:06.172807 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7486c568bf-jngmz"]
Mar 18 13:26:06.181631 master-0 kubenswrapper[28504]: W0318 13:26:06.181525 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4022ee9_babb_4dc3_a486_ddbab9fa8c16.slice/crio-3fabef9d0629883821c407cd40b7b792db02f7a31181978179677a6ce6565f15 WatchSource:0}: Error finding container 3fabef9d0629883821c407cd40b7b792db02f7a31181978179677a6ce6565f15: Status 404 returned error can't find the container with id 3fabef9d0629883821c407cd40b7b792db02f7a31181978179677a6ce6565f15
Mar 18 13:26:06.234811 master-0 kubenswrapper[28504]: I0318 13:26:06.234713 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a495b154-7a49-4e26-b6cf-421686c986ff-kube-api-access\") pod \"a495b154-7a49-4e26-b6cf-421686c986ff\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") "
Mar 18 13:26:06.234811 master-0 kubenswrapper[28504]: I0318 13:26:06.234819 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-kubelet-dir\") pod \"a495b154-7a49-4e26-b6cf-421686c986ff\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") "
Mar 18 13:26:06.235172 master-0 kubenswrapper[28504]: I0318 13:26:06.234891 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-var-lock\") pod \"a495b154-7a49-4e26-b6cf-421686c986ff\" (UID: \"a495b154-7a49-4e26-b6cf-421686c986ff\") "
Mar 18 13:26:06.235323 master-0 kubenswrapper[28504]: I0318 13:26:06.235287 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-var-lock" (OuterVolumeSpecName: "var-lock") pod "a495b154-7a49-4e26-b6cf-421686c986ff" (UID: "a495b154-7a49-4e26-b6cf-421686c986ff"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:26:06.235378 master-0 kubenswrapper[28504]: I0318 13:26:06.235297 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a495b154-7a49-4e26-b6cf-421686c986ff" (UID: "a495b154-7a49-4e26-b6cf-421686c986ff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:26:06.238776 master-0 kubenswrapper[28504]: I0318 13:26:06.238427 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a495b154-7a49-4e26-b6cf-421686c986ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a495b154-7a49-4e26-b6cf-421686c986ff" (UID: "a495b154-7a49-4e26-b6cf-421686c986ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:26:06.337407 master-0 kubenswrapper[28504]: I0318 13:26:06.337314 28504 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:06.337407 master-0 kubenswrapper[28504]: I0318 13:26:06.337381 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a495b154-7a49-4e26-b6cf-421686c986ff-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:06.337407 master-0 kubenswrapper[28504]: I0318 13:26:06.337392 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a495b154-7a49-4e26-b6cf-421686c986ff-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:06.355800 master-0 kubenswrapper[28504]: I0318 13:26:06.355732 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_a495b154-7a49-4e26-b6cf-421686c986ff/installer/0.log"
Mar 18 13:26:06.355800 master-0 kubenswrapper[28504]: I0318 13:26:06.355797 28504 generic.go:334] "Generic (PLEG): container finished" podID="a495b154-7a49-4e26-b6cf-421686c986ff" containerID="098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501" exitCode=1
Mar 18 13:26:06.356274 master-0 kubenswrapper[28504]: I0318 13:26:06.355855 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a495b154-7a49-4e26-b6cf-421686c986ff","Type":"ContainerDied","Data":"098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501"}
Mar 18 13:26:06.356274 master-0 kubenswrapper[28504]: I0318 13:26:06.355887 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a495b154-7a49-4e26-b6cf-421686c986ff","Type":"ContainerDied","Data":"ebed4b008dd25291b94a2a7eb9066f946cf66f82207f5f070c5e9b03e42311bf"}
Mar 18 13:26:06.356274 master-0 kubenswrapper[28504]: I0318 13:26:06.355909 28504 scope.go:117] "RemoveContainer" containerID="098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501"
Mar 18 13:26:06.356274 master-0 kubenswrapper[28504]: I0318 13:26:06.356066 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 13:26:06.361568 master-0 kubenswrapper[28504]: I0318 13:26:06.361449 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-crvh7" event={"ID":"2cf62b58-2c1c-4187-8fca-1a60b51a1783","Type":"ContainerStarted","Data":"7110e5a33ef3664db03798663729b29e007d4d8683e8102958966a2418e4fa96"}
Mar 18 13:26:06.361645 master-0 kubenswrapper[28504]: I0318 13:26:06.361574 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-crvh7"
Mar 18 13:26:06.364374 master-0 kubenswrapper[28504]: I0318 13:26:06.364313 28504 patch_prober.go:28] interesting pod/downloads-66b8ffb895-crvh7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body=
Mar 18 13:26:06.364475 master-0 kubenswrapper[28504]: I0318 13:26:06.364408 28504 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-crvh7" podUID="2cf62b58-2c1c-4187-8fca-1a60b51a1783" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused"
Mar 18 13:26:06.364475 master-0 kubenswrapper[28504]: I0318 13:26:06.364422 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7486c568bf-jngmz" event={"ID":"e4022ee9-babb-4dc3-a486-ddbab9fa8c16","Type":"ContainerStarted","Data":"74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0"}
Mar 18 13:26:06.364475 master-0 kubenswrapper[28504]: I0318 13:26:06.364462 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7486c568bf-jngmz" event={"ID":"e4022ee9-babb-4dc3-a486-ddbab9fa8c16","Type":"ContainerStarted","Data":"3fabef9d0629883821c407cd40b7b792db02f7a31181978179677a6ce6565f15"}
Mar 18 13:26:06.386924 master-0 kubenswrapper[28504]: I0318 13:26:06.386840 28504 scope.go:117] "RemoveContainer" containerID="098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501"
Mar 18 13:26:06.387928 master-0 kubenswrapper[28504]: E0318 13:26:06.387805 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501\": container with ID starting with 098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501 not found: ID does not exist" containerID="098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501"
Mar 18 13:26:06.387928 master-0 kubenswrapper[28504]: I0318 13:26:06.387858 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501"} err="failed to get container status \"098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501\": rpc error: code = NotFound desc = could not find container \"098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501\": container with ID starting with 098a3e0f51fd0f111e80c507c8f6935665f0782f2d362df32b723bf95eedc501 not found: ID does not exist"
Mar 18 13:26:06.521965 master-0 kubenswrapper[28504]: I0318 13:26:06.521858 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-crvh7" podStartSLOduration=1.7981729290000001 podStartE2EDuration="55.52183368s" podCreationTimestamp="2026-03-18 13:25:11 +0000 UTC" firstStartedPulling="2026-03-18 13:25:12.119398083 +0000 UTC m=+89.614203858" lastFinishedPulling="2026-03-18 13:26:05.843058824 +0000 UTC m=+143.337864609" observedRunningTime="2026-03-18 13:26:06.521688465 +0000 UTC m=+144.016494250" watchObservedRunningTime="2026-03-18 13:26:06.52183368 +0000 UTC m=+144.016639455"
Mar 18 13:26:06.784324 master-0 kubenswrapper[28504]: I0318 13:26:06.784158 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7486c568bf-jngmz" podStartSLOduration=22.784135038 podStartE2EDuration="22.784135038s" podCreationTimestamp="2026-03-18 13:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:06.783359056 +0000 UTC m=+144.278164831" watchObservedRunningTime="2026-03-18 13:26:06.784135038 +0000 UTC m=+144.278940813"
Mar 18 13:26:07.158902 master-0 kubenswrapper[28504]: I0318 13:26:07.158734 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 13:26:07.378272 master-0 kubenswrapper[28504]: I0318 13:26:07.378191 28504 patch_prober.go:28] interesting pod/downloads-66b8ffb895-crvh7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body=
Mar 18 13:26:07.378518 master-0 kubenswrapper[28504]: I0318 13:26:07.378276 28504 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-crvh7" podUID="2cf62b58-2c1c-4187-8fca-1a60b51a1783" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused"
Mar 18 13:26:07.498752 master-0 kubenswrapper[28504]: I0318 13:26:07.498681 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 13:26:08.511981 master-0 kubenswrapper[28504]: I0318 13:26:08.511888 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f8d98648f-9x5n4" podUID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" containerName="console" containerID="cri-o://93659e5ef1fa2e68bb3c2b0208fcfc9b4b9bcad5d5a4c0d5bc016ba657186120" gracePeriod=15
Mar 18 13:26:08.759511 master-0 kubenswrapper[28504]: I0318 13:26:08.759437 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a495b154-7a49-4e26-b6cf-421686c986ff" path="/var/lib/kubelet/pods/a495b154-7a49-4e26-b6cf-421686c986ff/volumes"
Mar 18 13:26:09.956021 master-0 kubenswrapper[28504]: I0318 13:26:09.390762 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f8d98648f-9x5n4_5b35ff9a-60ec-41d3-977e-c2fca74c16e0/console/0.log"
Mar 18 13:26:09.956021 master-0 kubenswrapper[28504]: I0318 13:26:09.390831 28504 generic.go:334] "Generic (PLEG): container finished" podID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" containerID="93659e5ef1fa2e68bb3c2b0208fcfc9b4b9bcad5d5a4c0d5bc016ba657186120" exitCode=2
Mar 18 13:26:09.956021 master-0 kubenswrapper[28504]: I0318 13:26:09.390899 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f8d98648f-9x5n4" event={"ID":"5b35ff9a-60ec-41d3-977e-c2fca74c16e0","Type":"ContainerDied","Data":"93659e5ef1fa2e68bb3c2b0208fcfc9b4b9bcad5d5a4c0d5bc016ba657186120"}
Mar 18 13:26:11.196158 master-0 kubenswrapper[28504]: I0318 13:26:11.196075 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f8d98648f-9x5n4_5b35ff9a-60ec-41d3-977e-c2fca74c16e0/console/0.log"
Mar 18 13:26:11.196696 master-0 kubenswrapper[28504]: I0318 13:26:11.196188 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f8d98648f-9x5n4"
Mar 18 13:26:11.229161 master-0 kubenswrapper[28504]: I0318 13:26:11.229051 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-oauth-config\") pod \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") "
Mar 18 13:26:11.229161 master-0 kubenswrapper[28504]: I0318 13:26:11.229143 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-oauth-serving-cert\") pod \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") "
Mar 18 13:26:11.229161 master-0 kubenswrapper[28504]: I0318 13:26:11.229179 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hvtg\" (UniqueName: \"kubernetes.io/projected/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-kube-api-access-6hvtg\") pod \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") "
Mar 18 13:26:11.229511 master-0 kubenswrapper[28504]: I0318 13:26:11.229205 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-service-ca\") pod \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") "
Mar 18 13:26:11.229511 master-0 kubenswrapper[28504]: I0318 13:26:11.229226 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-serving-cert\") pod \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") "
Mar 18 13:26:11.229511 master-0 kubenswrapper[28504]: I0318 13:26:11.229260 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-config\") pod \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\" (UID: \"5b35ff9a-60ec-41d3-977e-c2fca74c16e0\") "
Mar 18 13:26:11.231043 master-0 kubenswrapper[28504]: I0318 13:26:11.230987 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-service-ca" (OuterVolumeSpecName: "service-ca") pod "5b35ff9a-60ec-41d3-977e-c2fca74c16e0" (UID: "5b35ff9a-60ec-41d3-977e-c2fca74c16e0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:26:11.231268 master-0 kubenswrapper[28504]: I0318 13:26:11.231236 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-config" (OuterVolumeSpecName: "console-config") pod "5b35ff9a-60ec-41d3-977e-c2fca74c16e0" (UID: "5b35ff9a-60ec-41d3-977e-c2fca74c16e0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:26:11.231383 master-0 kubenswrapper[28504]: I0318 13:26:11.231350 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5b35ff9a-60ec-41d3-977e-c2fca74c16e0" (UID: "5b35ff9a-60ec-41d3-977e-c2fca74c16e0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:26:11.233264 master-0 kubenswrapper[28504]: I0318 13:26:11.233193 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5b35ff9a-60ec-41d3-977e-c2fca74c16e0" (UID: "5b35ff9a-60ec-41d3-977e-c2fca74c16e0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:26:11.233360 master-0 kubenswrapper[28504]: I0318 13:26:11.233229 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-kube-api-access-6hvtg" (OuterVolumeSpecName: "kube-api-access-6hvtg") pod "5b35ff9a-60ec-41d3-977e-c2fca74c16e0" (UID: "5b35ff9a-60ec-41d3-977e-c2fca74c16e0"). InnerVolumeSpecName "kube-api-access-6hvtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:26:11.233522 master-0 kubenswrapper[28504]: I0318 13:26:11.233461 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5b35ff9a-60ec-41d3-977e-c2fca74c16e0" (UID: "5b35ff9a-60ec-41d3-977e-c2fca74c16e0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:26:11.332038 master-0 kubenswrapper[28504]: I0318 13:26:11.331879 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hvtg\" (UniqueName: \"kubernetes.io/projected/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-kube-api-access-6hvtg\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:11.332038 master-0 kubenswrapper[28504]: I0318 13:26:11.331952 28504 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:11.332038 master-0 kubenswrapper[28504]: I0318 13:26:11.331991 28504 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:11.332038 master-0 kubenswrapper[28504]: I0318 13:26:11.332021 28504 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:11.332038 master-0 kubenswrapper[28504]: I0318 13:26:11.332042 28504 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:11.332038 master-0 kubenswrapper[28504]: I0318 13:26:11.332059 28504 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b35ff9a-60ec-41d3-977e-c2fca74c16e0-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:26:11.412354 master-0 kubenswrapper[28504]: I0318 13:26:11.412280 28504 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-console_console-f8d98648f-9x5n4_5b35ff9a-60ec-41d3-977e-c2fca74c16e0/console/0.log" Mar 18 13:26:11.412791 master-0 kubenswrapper[28504]: I0318 13:26:11.412382 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f8d98648f-9x5n4" event={"ID":"5b35ff9a-60ec-41d3-977e-c2fca74c16e0","Type":"ContainerDied","Data":"3c7ab017383fc797c23b30b1bb737974173ea769320a830f56cccb6ac068f9f7"} Mar 18 13:26:11.412791 master-0 kubenswrapper[28504]: I0318 13:26:11.412471 28504 scope.go:117] "RemoveContainer" containerID="93659e5ef1fa2e68bb3c2b0208fcfc9b4b9bcad5d5a4c0d5bc016ba657186120" Mar 18 13:26:11.412791 master-0 kubenswrapper[28504]: I0318 13:26:11.412746 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f8d98648f-9x5n4" Mar 18 13:26:11.576362 master-0 kubenswrapper[28504]: I0318 13:26:11.576202 28504 patch_prober.go:28] interesting pod/downloads-66b8ffb895-crvh7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body= Mar 18 13:26:11.576362 master-0 kubenswrapper[28504]: I0318 13:26:11.576231 28504 patch_prober.go:28] interesting pod/downloads-66b8ffb895-crvh7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body= Mar 18 13:26:11.576362 master-0 kubenswrapper[28504]: I0318 13:26:11.576271 28504 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-crvh7" podUID="2cf62b58-2c1c-4187-8fca-1a60b51a1783" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" Mar 18 13:26:11.577454 master-0 kubenswrapper[28504]: I0318 13:26:11.576279 28504 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-66b8ffb895-crvh7" podUID="2cf62b58-2c1c-4187-8fca-1a60b51a1783" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" Mar 18 13:26:13.472715 master-0 kubenswrapper[28504]: I0318 13:26:13.472636 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:26:13.472715 master-0 kubenswrapper[28504]: I0318 13:26:13.472703 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:26:14.648443 master-0 kubenswrapper[28504]: I0318 13:26:14.648366 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f8d98648f-9x5n4"] Mar 18 13:26:14.668984 master-0 kubenswrapper[28504]: I0318 13:26:14.668894 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f8d98648f-9x5n4"] Mar 18 13:26:14.760057 master-0 kubenswrapper[28504]: I0318 13:26:14.759405 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" path="/var/lib/kubelet/pods/5b35ff9a-60ec-41d3-977e-c2fca74c16e0/volumes" Mar 18 13:26:14.874059 master-0 kubenswrapper[28504]: I0318 13:26:14.873975 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7486c568bf-jngmz" Mar 18 13:26:14.875620 master-0 kubenswrapper[28504]: I0318 13:26:14.875545 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-7486c568bf-jngmz" Mar 18 13:26:14.876702 master-0 kubenswrapper[28504]: I0318 13:26:14.876644 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:26:14.876804 master-0 kubenswrapper[28504]: I0318 13:26:14.876721 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:26:17.483909 master-0 kubenswrapper[28504]: I0318 13:26:17.483808 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" podUID="436fad70-517b-4375-ac49-77829a6969de" containerName="oauth-openshift" containerID="cri-o://f0a2722d1309cf82ebe9b74c6a5664dd8abcb6d18e7e2ec265add83a45d29b08" gracePeriod=15 Mar 18 13:26:18.467452 master-0 kubenswrapper[28504]: I0318 13:26:18.467388 28504 generic.go:334] "Generic (PLEG): container finished" podID="436fad70-517b-4375-ac49-77829a6969de" containerID="f0a2722d1309cf82ebe9b74c6a5664dd8abcb6d18e7e2ec265add83a45d29b08" exitCode=0 Mar 18 13:26:18.467452 master-0 kubenswrapper[28504]: I0318 13:26:18.467439 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" event={"ID":"436fad70-517b-4375-ac49-77829a6969de","Type":"ContainerDied","Data":"f0a2722d1309cf82ebe9b74c6a5664dd8abcb6d18e7e2ec265add83a45d29b08"} Mar 18 13:26:19.356807 master-0 kubenswrapper[28504]: I0318 13:26:19.356761 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:26:19.476270 master-0 kubenswrapper[28504]: I0318 13:26:19.476205 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" event={"ID":"436fad70-517b-4375-ac49-77829a6969de","Type":"ContainerDied","Data":"02aaf6f550f51aaa40a24cc5df57348ee35c9dfa40627854293c9e1f22ccf267"} Mar 18 13:26:19.476270 master-0 kubenswrapper[28504]: I0318 13:26:19.476272 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c4f65fbf4-78m99" Mar 18 13:26:19.476618 master-0 kubenswrapper[28504]: I0318 13:26:19.476275 28504 scope.go:117] "RemoveContainer" containerID="f0a2722d1309cf82ebe9b74c6a5664dd8abcb6d18e7e2ec265add83a45d29b08" Mar 18 13:26:19.531290 master-0 kubenswrapper[28504]: I0318 13:26:19.531207 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-audit-policies\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.531564 master-0 kubenswrapper[28504]: I0318 13:26:19.531546 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-login\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.531673 master-0 kubenswrapper[28504]: I0318 13:26:19.531661 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-service-ca\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: 
\"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.531768 master-0 kubenswrapper[28504]: I0318 13:26:19.531756 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-router-certs\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.531862 master-0 kubenswrapper[28504]: I0318 13:26:19.531850 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-serving-cert\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.531995 master-0 kubenswrapper[28504]: I0318 13:26:19.531780 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:26:19.532052 master-0 kubenswrapper[28504]: I0318 13:26:19.531971 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-session\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.532086 master-0 kubenswrapper[28504]: I0318 13:26:19.532062 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-ocp-branding-template\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.532153 master-0 kubenswrapper[28504]: I0318 13:26:19.532132 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/436fad70-517b-4375-ac49-77829a6969de-audit-dir\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.532216 master-0 kubenswrapper[28504]: I0318 13:26:19.532195 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-error\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.532259 master-0 kubenswrapper[28504]: I0318 13:26:19.532221 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: 
"436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:26:19.532322 master-0 kubenswrapper[28504]: I0318 13:26:19.532257 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/436fad70-517b-4375-ac49-77829a6969de-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:19.532492 master-0 kubenswrapper[28504]: I0318 13:26:19.532462 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-trusted-ca-bundle\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.532537 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-provider-selection\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.532583 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzpwf\" (UniqueName: \"kubernetes.io/projected/436fad70-517b-4375-ac49-77829a6969de-kube-api-access-qzpwf\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.532628 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-cliconfig\") pod \"436fad70-517b-4375-ac49-77829a6969de\" (UID: \"436fad70-517b-4375-ac49-77829a6969de\") " Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.533124 28504 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.533146 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.533160 28504 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/436fad70-517b-4375-ac49-77829a6969de-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.533285 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:26:19.533638 master-0 kubenswrapper[28504]: I0318 13:26:19.533516 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:26:19.535051 master-0 kubenswrapper[28504]: I0318 13:26:19.534977 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.535051 master-0 kubenswrapper[28504]: I0318 13:26:19.535010 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.535371 master-0 kubenswrapper[28504]: I0318 13:26:19.535150 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.535792 master-0 kubenswrapper[28504]: I0318 13:26:19.535647 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436fad70-517b-4375-ac49-77829a6969de-kube-api-access-qzpwf" (OuterVolumeSpecName: "kube-api-access-qzpwf") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "kube-api-access-qzpwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:26:19.535792 master-0 kubenswrapper[28504]: I0318 13:26:19.535649 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.535904 master-0 kubenswrapper[28504]: I0318 13:26:19.535780 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.535904 master-0 kubenswrapper[28504]: I0318 13:26:19.535859 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.537056 master-0 kubenswrapper[28504]: I0318 13:26:19.537027 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "436fad70-517b-4375-ac49-77829a6969de" (UID: "436fad70-517b-4375-ac49-77829a6969de"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634417 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634469 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634483 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzpwf\" (UniqueName: \"kubernetes.io/projected/436fad70-517b-4375-ac49-77829a6969de-kube-api-access-qzpwf\") on node \"master-0\" 
DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634496 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634506 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634517 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634529 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.634537 master-0 kubenswrapper[28504]: I0318 13:26:19.634542 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.635048 master-0 kubenswrapper[28504]: I0318 13:26:19.634556 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.635048 master-0 kubenswrapper[28504]: I0318 
13:26:19.634570 28504 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/436fad70-517b-4375-ac49-77829a6969de-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:19.644870 master-0 kubenswrapper[28504]: I0318 13:26:19.644808 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-67d599f9d6-s5drj"] Mar 18 13:26:19.645171 master-0 kubenswrapper[28504]: E0318 13:26:19.645145 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" containerName="console" Mar 18 13:26:19.645171 master-0 kubenswrapper[28504]: I0318 13:26:19.645168 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" containerName="console" Mar 18 13:26:19.645238 master-0 kubenswrapper[28504]: E0318 13:26:19.645186 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a495b154-7a49-4e26-b6cf-421686c986ff" containerName="installer" Mar 18 13:26:19.645238 master-0 kubenswrapper[28504]: I0318 13:26:19.645195 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="a495b154-7a49-4e26-b6cf-421686c986ff" containerName="installer" Mar 18 13:26:19.645238 master-0 kubenswrapper[28504]: E0318 13:26:19.645231 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436fad70-517b-4375-ac49-77829a6969de" containerName="oauth-openshift" Mar 18 13:26:19.645331 master-0 kubenswrapper[28504]: I0318 13:26:19.645244 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="436fad70-517b-4375-ac49-77829a6969de" containerName="oauth-openshift" Mar 18 13:26:19.645482 master-0 kubenswrapper[28504]: I0318 13:26:19.645458 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="436fad70-517b-4375-ac49-77829a6969de" containerName="oauth-openshift" Mar 18 13:26:19.645517 master-0 kubenswrapper[28504]: I0318 13:26:19.645483 28504 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5b35ff9a-60ec-41d3-977e-c2fca74c16e0" containerName="console" Mar 18 13:26:19.645517 master-0 kubenswrapper[28504]: I0318 13:26:19.645507 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="a495b154-7a49-4e26-b6cf-421686c986ff" containerName="installer" Mar 18 13:26:19.646300 master-0 kubenswrapper[28504]: I0318 13:26:19.646270 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.736245 master-0 kubenswrapper[28504]: I0318 13:26:19.736186 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qxs\" (UniqueName: \"kubernetes.io/projected/1951681b-a335-4cae-8006-202d4cdb5b96-kube-api-access-h7qxs\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.736536 master-0 kubenswrapper[28504]: I0318 13:26:19.736516 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.736633 master-0 kubenswrapper[28504]: I0318 13:26:19.736620 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1951681b-a335-4cae-8006-202d4cdb5b96-audit-dir\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.736721 master-0 kubenswrapper[28504]: I0318 13:26:19.736709 
28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-error\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.736845 master-0 kubenswrapper[28504]: I0318 13:26:19.736823 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.736974 master-0 kubenswrapper[28504]: I0318 13:26:19.736938 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-service-ca\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737163 master-0 kubenswrapper[28504]: I0318 13:26:19.737146 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-audit-policies\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737286 master-0 kubenswrapper[28504]: I0318 13:26:19.737266 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-router-certs\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737401 master-0 kubenswrapper[28504]: I0318 13:26:19.737387 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-login\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737606 master-0 kubenswrapper[28504]: I0318 13:26:19.737548 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737672 master-0 kubenswrapper[28504]: I0318 13:26:19.737610 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737672 master-0 kubenswrapper[28504]: I0318 13:26:19.737654 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-session\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.737755 master-0 kubenswrapper[28504]: I0318 13:26:19.737700 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.769001 master-0 kubenswrapper[28504]: I0318 13:26:19.768956 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-67d599f9d6-s5drj"] Mar 18 13:26:19.839310 master-0 kubenswrapper[28504]: I0318 13:26:19.839238 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.839310 master-0 kubenswrapper[28504]: I0318 13:26:19.839304 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.840393 master-0 kubenswrapper[28504]: I0318 13:26:19.840202 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-session\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.840531 master-0 kubenswrapper[28504]: I0318 13:26:19.840502 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.840593 master-0 kubenswrapper[28504]: I0318 13:26:19.840576 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qxs\" (UniqueName: \"kubernetes.io/projected/1951681b-a335-4cae-8006-202d4cdb5b96-kube-api-access-h7qxs\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.840729 master-0 kubenswrapper[28504]: I0318 13:26:19.840636 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.840799 master-0 kubenswrapper[28504]: I0318 13:26:19.840760 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1951681b-a335-4cae-8006-202d4cdb5b96-audit-dir\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: 
\"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.840799 master-0 kubenswrapper[28504]: I0318 13:26:19.840785 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-error\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.841694 master-0 kubenswrapper[28504]: I0318 13:26:19.840811 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.841694 master-0 kubenswrapper[28504]: I0318 13:26:19.840833 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-service-ca\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.841694 master-0 kubenswrapper[28504]: I0318 13:26:19.840901 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-audit-policies\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.841694 master-0 kubenswrapper[28504]: I0318 
13:26:19.841075 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-router-certs\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.841694 master-0 kubenswrapper[28504]: I0318 13:26:19.841527 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.841956 master-0 kubenswrapper[28504]: I0318 13:26:19.841748 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-audit-policies\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.843115 master-0 kubenswrapper[28504]: I0318 13:26:19.842710 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-login\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.843359 master-0 kubenswrapper[28504]: I0318 13:26:19.843287 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-session\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.844063 master-0 kubenswrapper[28504]: I0318 13:26:19.844017 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.844333 master-0 kubenswrapper[28504]: I0318 13:26:19.844247 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1951681b-a335-4cae-8006-202d4cdb5b96-audit-dir\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.844386 master-0 kubenswrapper[28504]: I0318 13:26:19.844323 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.844766 master-0 kubenswrapper[28504]: I0318 13:26:19.844746 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-service-ca\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " 
pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.845117 master-0 kubenswrapper[28504]: I0318 13:26:19.845053 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-router-certs\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.846070 master-0 kubenswrapper[28504]: I0318 13:26:19.846007 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.846147 master-0 kubenswrapper[28504]: I0318 13:26:19.846049 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.846380 master-0 kubenswrapper[28504]: I0318 13:26:19.846344 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-login\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.847683 master-0 kubenswrapper[28504]: I0318 13:26:19.847649 28504 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1951681b-a335-4cae-8006-202d4cdb5b96-v4-0-config-user-template-error\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.902534 master-0 kubenswrapper[28504]: I0318 13:26:19.902016 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qxs\" (UniqueName: \"kubernetes.io/projected/1951681b-a335-4cae-8006-202d4cdb5b96-kube-api-access-h7qxs\") pod \"oauth-openshift-67d599f9d6-s5drj\" (UID: \"1951681b-a335-4cae-8006-202d4cdb5b96\") " pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:19.983562 master-0 kubenswrapper[28504]: I0318 13:26:19.983467 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:20.817115 master-0 kubenswrapper[28504]: I0318 13:26:20.817032 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"] Mar 18 13:26:21.062286 master-0 kubenswrapper[28504]: I0318 13:26:21.062222 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6c4f65fbf4-78m99"] Mar 18 13:26:21.198997 master-0 kubenswrapper[28504]: I0318 13:26:21.198875 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-67d599f9d6-s5drj"] Mar 18 13:26:21.290289 master-0 kubenswrapper[28504]: W0318 13:26:21.290230 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1951681b_a335_4cae_8006_202d4cdb5b96.slice/crio-f6b3ca5f52ddf7059462d0367a4bf780590edc40cc8681cb1f4d47c3730fb341 WatchSource:0}: Error finding container 
f6b3ca5f52ddf7059462d0367a4bf780590edc40cc8681cb1f4d47c3730fb341: Status 404 returned error can't find the container with id f6b3ca5f52ddf7059462d0367a4bf780590edc40cc8681cb1f4d47c3730fb341 Mar 18 13:26:22.256808 master-0 kubenswrapper[28504]: I0318 13:26:22.256101 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-crvh7" Mar 18 13:26:22.259085 master-0 kubenswrapper[28504]: I0318 13:26:22.259018 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" event={"ID":"1951681b-a335-4cae-8006-202d4cdb5b96","Type":"ContainerStarted","Data":"f6b3ca5f52ddf7059462d0367a4bf780590edc40cc8681cb1f4d47c3730fb341"} Mar 18 13:26:22.922151 master-0 kubenswrapper[28504]: I0318 13:26:22.760029 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436fad70-517b-4375-ac49-77829a6969de" path="/var/lib/kubelet/pods/436fad70-517b-4375-ac49-77829a6969de/volumes" Mar 18 13:26:23.266979 master-0 kubenswrapper[28504]: I0318 13:26:23.266907 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" event={"ID":"1951681b-a335-4cae-8006-202d4cdb5b96","Type":"ContainerStarted","Data":"bff40d2fe3dd6aa1d588de6f0ca097cc054b6adbf7433d2b90b9812709ba1d2b"} Mar 18 13:26:23.267481 master-0 kubenswrapper[28504]: I0318 13:26:23.267235 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:23.473125 master-0 kubenswrapper[28504]: I0318 13:26:23.473075 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:26:23.473498 master-0 kubenswrapper[28504]: I0318 13:26:23.473458 28504 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:26:24.267660 master-0 kubenswrapper[28504]: I0318 13:26:24.267590 28504 patch_prober.go:28] interesting pod/oauth-openshift-67d599f9d6-s5drj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.102:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:26:24.268529 master-0 kubenswrapper[28504]: I0318 13:26:24.267677 28504 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" podUID="1951681b-a335-4cae-8006-202d4cdb5b96" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.102:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:26:24.874524 master-0 kubenswrapper[28504]: I0318 13:26:24.874382 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:26:24.874524 master-0 kubenswrapper[28504]: I0318 13:26:24.874472 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:26:25.275471 master-0 kubenswrapper[28504]: I0318 13:26:25.275381 28504 patch_prober.go:28] interesting 
pod/oauth-openshift-67d599f9d6-s5drj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.102:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:26:25.276234 master-0 kubenswrapper[28504]: I0318 13:26:25.275490 28504 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" podUID="1951681b-a335-4cae-8006-202d4cdb5b96" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.102:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:26:28.707479 master-0 kubenswrapper[28504]: I0318 13:26:28.707387 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" podStartSLOduration=36.707367044 podStartE2EDuration="36.707367044s" podCreationTimestamp="2026-03-18 13:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:28.706611692 +0000 UTC m=+166.201417477" watchObservedRunningTime="2026-03-18 13:26:28.707367044 +0000 UTC m=+166.202172819" Mar 18 13:26:29.989381 master-0 kubenswrapper[28504]: I0318 13:26:29.989290 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-67d599f9d6-s5drj" Mar 18 13:26:33.473161 master-0 kubenswrapper[28504]: I0318 13:26:33.473085 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:26:33.473764 master-0 kubenswrapper[28504]: I0318 13:26:33.473171 28504 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:26:34.873637 master-0 kubenswrapper[28504]: I0318 13:26:34.873581 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:26:34.874266 master-0 kubenswrapper[28504]: I0318 13:26:34.873654 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:26:43.472163 master-0 kubenswrapper[28504]: I0318 13:26:43.472095 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:26:43.472794 master-0 kubenswrapper[28504]: I0318 13:26:43.472164 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:26:44.874435 master-0 kubenswrapper[28504]: I0318 13:26:44.874356 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:26:44.874986 master-0 kubenswrapper[28504]: I0318 13:26:44.874460 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:26:45.409226 master-0 kubenswrapper[28504]: I0318 13:26:45.409159 28504 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:26:45.410133 master-0 kubenswrapper[28504]: I0318 13:26:45.410111 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.410442 master-0 kubenswrapper[28504]: E0318 13:26:45.410391 28504 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 18 13:26:45.410708 master-0 kubenswrapper[28504]: I0318 13:26:45.410671 28504 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:26:45.412147 master-0 kubenswrapper[28504]: I0318 13:26:45.412091 28504 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:26:45.412404 master-0 kubenswrapper[28504]: E0318 13:26:45.412386 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:26:45.412404 master-0 kubenswrapper[28504]: I0318 13:26:45.412403 28504 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: E0318 13:26:45.412418 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: I0318 13:26:45.412425 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: E0318 13:26:45.412445 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: I0318 13:26:45.412452 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: E0318 13:26:45.412458 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: I0318 13:26:45.412464 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: E0318 13:26:45.412475 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: I0318 13:26:45.412480 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: E0318 13:26:45.412490 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" 
containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:26:45.412505 master-0 kubenswrapper[28504]: I0318 13:26:45.412497 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:26:45.412786 master-0 kubenswrapper[28504]: I0318 13:26:45.412649 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 13:26:45.412786 master-0 kubenswrapper[28504]: I0318 13:26:45.412663 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:26:45.412786 master-0 kubenswrapper[28504]: I0318 13:26:45.412670 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:26:45.412786 master-0 kubenswrapper[28504]: I0318 13:26:45.412681 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 13:26:45.412786 master-0 kubenswrapper[28504]: I0318 13:26:45.412701 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 13:26:45.412786 master-0 kubenswrapper[28504]: I0318 13:26:45.412710 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:26:45.413236 master-0 kubenswrapper[28504]: E0318 13:26:45.412839 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:26:45.413236 master-0 kubenswrapper[28504]: I0318 13:26:45.412852 28504 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:26:45.444602 master-0 kubenswrapper[28504]: I0318 13:26:45.444538 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:26:45.473815 master-0 kubenswrapper[28504]: I0318 13:26:45.473741 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" containerID="cri-o://2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb" gracePeriod=15 Mar 18 13:26:45.474077 master-0 kubenswrapper[28504]: I0318 13:26:45.473822 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7" gracePeriod=15 Mar 18 13:26:45.474077 master-0 kubenswrapper[28504]: I0318 13:26:45.473760 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292" gracePeriod=15 Mar 18 13:26:45.474077 master-0 kubenswrapper[28504]: I0318 13:26:45.473985 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a" gracePeriod=15 Mar 18 13:26:45.474077 master-0 kubenswrapper[28504]: I0318 13:26:45.473913 28504 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" containerID="cri-o://377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703" gracePeriod=15 Mar 18 13:26:45.487677 master-0 kubenswrapper[28504]: I0318 13:26:45.487608 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.487811 master-0 kubenswrapper[28504]: I0318 13:26:45.487703 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.487811 master-0 kubenswrapper[28504]: I0318 13:26:45.487736 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.487811 master-0 kubenswrapper[28504]: I0318 13:26:45.487770 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.487992 master-0 kubenswrapper[28504]: I0318 13:26:45.487826 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.487992 master-0 kubenswrapper[28504]: I0318 13:26:45.487857 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.487992 master-0 kubenswrapper[28504]: I0318 13:26:45.487879 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.487992 master-0 kubenswrapper[28504]: I0318 13:26:45.487910 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590627 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590709 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590735 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590756 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590780 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590817 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590848 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590863 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590951 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.590998 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 
13:26:45.591044 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.591087 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.591133 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.591163 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.591192 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 
18 13:26:45.591274 master-0 kubenswrapper[28504]: I0318 13:26:45.591221 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:45.741490 master-0 kubenswrapper[28504]: I0318 13:26:45.741423 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.763643 master-0 kubenswrapper[28504]: W0318 13:26:45.763589 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbfbf2b56df0323ba118d68bfdad8b9.slice/crio-c482e5955f2b50f674aa7f2ce41feb366e515a6ec7d111d7c9660cc533421751 WatchSource:0}: Error finding container c482e5955f2b50f674aa7f2ce41feb366e515a6ec7d111d7c9660cc533421751: Status 404 returned error can't find the container with id c482e5955f2b50f674aa7f2ce41feb366e515a6ec7d111d7c9660cc533421751 Mar 18 13:26:45.766869 master-0 kubenswrapper[28504]: E0318 13:26:45.766678 28504 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df27369e5bade openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebbfbf2b56df0323ba118d68bfdad8b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:26:45.76566755 +0000 UTC m=+183.260473325,LastTimestamp:2026-03-18 13:26:45.76566755 +0000 UTC m=+183.260473325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:26:46.419863 master-0 kubenswrapper[28504]: E0318 13:26:46.419614 28504 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df27369e5bade openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebbfbf2b56df0323ba118d68bfdad8b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:26:45.76566755 +0000 UTC m=+183.260473325,LastTimestamp:2026-03-18 13:26:45.76566755 +0000 UTC m=+183.260473325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:26:46.483569 master-0 kubenswrapper[28504]: I0318 13:26:46.483506 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 
13:26:46.484924 master-0 kubenswrapper[28504]: I0318 13:26:46.484884 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:26:46.485605 master-0 kubenswrapper[28504]: I0318 13:26:46.485563 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292" exitCode=0 Mar 18 13:26:46.485605 master-0 kubenswrapper[28504]: I0318 13:26:46.485596 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a" exitCode=0 Mar 18 13:26:46.485681 master-0 kubenswrapper[28504]: I0318 13:26:46.485607 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7" exitCode=0 Mar 18 13:26:46.485681 master-0 kubenswrapper[28504]: I0318 13:26:46.485617 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703" exitCode=2 Mar 18 13:26:46.485765 master-0 kubenswrapper[28504]: I0318 13:26:46.485681 28504 scope.go:117] "RemoveContainer" containerID="f8b0391a9dd6a8a76a315386f50081873095d6505ee1824ca4cf57436b5940a3" Mar 18 13:26:46.487537 master-0 kubenswrapper[28504]: I0318 13:26:46.487339 28504 generic.go:334] "Generic (PLEG): container finished" podID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" containerID="f45b7ebcd1fce97eb746518b9f7af0c2a6691e04403e4c43500af8ea88b9aca6" exitCode=0 Mar 18 13:26:46.487537 master-0 kubenswrapper[28504]: I0318 13:26:46.487479 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" 
event={"ID":"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef","Type":"ContainerDied","Data":"f45b7ebcd1fce97eb746518b9f7af0c2a6691e04403e4c43500af8ea88b9aca6"} Mar 18 13:26:46.488815 master-0 kubenswrapper[28504]: I0318 13:26:46.488713 28504 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:46.489068 master-0 kubenswrapper[28504]: I0318 13:26:46.488871 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96"} Mar 18 13:26:46.489068 master-0 kubenswrapper[28504]: I0318 13:26:46.488920 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"c482e5955f2b50f674aa7f2ce41feb366e515a6ec7d111d7c9660cc533421751"} Mar 18 13:26:46.489455 master-0 kubenswrapper[28504]: I0318 13:26:46.489409 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:46.489981 master-0 kubenswrapper[28504]: I0318 13:26:46.489922 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:46.491133 master-0 kubenswrapper[28504]: I0318 13:26:46.490647 28504 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:46.491310 master-0 kubenswrapper[28504]: I0318 13:26:46.491171 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:46.491787 master-0 kubenswrapper[28504]: I0318 13:26:46.491740 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.498828 master-0 kubenswrapper[28504]: I0318 13:26:47.498764 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:26:47.584993 master-0 kubenswrapper[28504]: E0318 13:26:47.584795 28504 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.586434 master-0 kubenswrapper[28504]: E0318 13:26:47.586375 28504 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.587253 master-0 kubenswrapper[28504]: E0318 13:26:47.587200 28504 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.587785 master-0 kubenswrapper[28504]: E0318 13:26:47.587752 28504 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.588275 master-0 kubenswrapper[28504]: E0318 13:26:47.588227 28504 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.588340 master-0 kubenswrapper[28504]: I0318 13:26:47.588279 28504 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 13:26:47.588809 master-0 kubenswrapper[28504]: E0318 13:26:47.588774 28504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 13:26:47.794120 master-0 kubenswrapper[28504]: E0318 13:26:47.790689 28504 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 13:26:47.935774 master-0 kubenswrapper[28504]: I0318 13:26:47.935718 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 13:26:47.936835 master-0 kubenswrapper[28504]: I0318 13:26:47.936783 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.937726 master-0 kubenswrapper[28504]: I0318 13:26:47.937691 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.942608 master-0 kubenswrapper[28504]: I0318 13:26:47.942576 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:26:47.943506 master-0 kubenswrapper[28504]: I0318 13:26:47.943485 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:47.944681 master-0 kubenswrapper[28504]: I0318 13:26:47.944621 28504 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.945522 master-0 kubenswrapper[28504]: I0318 13:26:47.945478 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:47.946216 master-0 kubenswrapper[28504]: I0318 13:26:47.946144 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.129251 master-0 kubenswrapper[28504]: I0318 13:26:48.129083 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kube-api-access\") pod \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " Mar 18 13:26:48.129441 master-0 kubenswrapper[28504]: I0318 13:26:48.129280 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 13:26:48.129441 master-0 kubenswrapper[28504]: I0318 13:26:48.129397 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 13:26:48.129539 master-0 kubenswrapper[28504]: I0318 13:26:48.129521 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 13:26:48.129573 master-0 kubenswrapper[28504]: I0318 13:26:48.129551 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-var-lock\") pod \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " Mar 18 13:26:48.129605 master-0 kubenswrapper[28504]: I0318 13:26:48.129547 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:48.129635 master-0 kubenswrapper[28504]: I0318 13:26:48.129598 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:48.129681 master-0 kubenswrapper[28504]: I0318 13:26:48.129636 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-var-lock" (OuterVolumeSpecName: "var-lock") pod "e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" (UID: "e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:48.129681 master-0 kubenswrapper[28504]: I0318 13:26:48.129588 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:48.129681 master-0 kubenswrapper[28504]: I0318 13:26:48.129585 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kubelet-dir\") pod \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\" (UID: \"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef\") " Mar 18 13:26:48.129797 master-0 kubenswrapper[28504]: I0318 13:26:48.129652 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" (UID: "e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:48.130273 master-0 kubenswrapper[28504]: I0318 13:26:48.130233 28504 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:48.130273 master-0 kubenswrapper[28504]: I0318 13:26:48.130264 28504 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:48.130367 master-0 kubenswrapper[28504]: I0318 13:26:48.130276 28504 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:48.130367 master-0 kubenswrapper[28504]: I0318 13:26:48.130291 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:48.130367 master-0 kubenswrapper[28504]: I0318 13:26:48.130303 28504 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:48.132356 master-0 kubenswrapper[28504]: I0318 13:26:48.132304 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" (UID: "e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:26:48.192840 master-0 kubenswrapper[28504]: E0318 13:26:48.192769 28504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 13:26:48.232212 master-0 kubenswrapper[28504]: I0318 13:26:48.232121 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:48.511822 master-0 kubenswrapper[28504]: I0318 13:26:48.511764 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:26:48.512878 master-0 kubenswrapper[28504]: I0318 13:26:48.512760 28504 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb" exitCode=0 Mar 18 13:26:48.512878 master-0 kubenswrapper[28504]: I0318 13:26:48.512856 28504 scope.go:117] "RemoveContainer" containerID="3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292" Mar 18 13:26:48.513084 master-0 kubenswrapper[28504]: I0318 13:26:48.512865 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:48.515691 master-0 kubenswrapper[28504]: I0318 13:26:48.515470 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef","Type":"ContainerDied","Data":"8ecaeaf2f4afdd70a7f61b8faadfc341341560cf04634b52a0d0c83003bfb235"} Mar 18 13:26:48.515691 master-0 kubenswrapper[28504]: I0318 13:26:48.515509 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ecaeaf2f4afdd70a7f61b8faadfc341341560cf04634b52a0d0c83003bfb235" Mar 18 13:26:48.515691 master-0 kubenswrapper[28504]: I0318 13:26:48.515546 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 13:26:48.528915 master-0 kubenswrapper[28504]: I0318 13:26:48.528875 28504 scope.go:117] "RemoveContainer" containerID="72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a" Mar 18 13:26:48.538441 master-0 kubenswrapper[28504]: I0318 13:26:48.538385 28504 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.539031 master-0 kubenswrapper[28504]: I0318 13:26:48.538981 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.539736 master-0 kubenswrapper[28504]: I0318 13:26:48.539702 28504 status_manager.go:851] "Failed to get 
status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.540865 master-0 kubenswrapper[28504]: I0318 13:26:48.540811 28504 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.541277 master-0 kubenswrapper[28504]: I0318 13:26:48.541239 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.541724 master-0 kubenswrapper[28504]: I0318 13:26:48.541651 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:48.551686 master-0 kubenswrapper[28504]: I0318 13:26:48.551637 28504 scope.go:117] "RemoveContainer" containerID="2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7" Mar 18 13:26:48.566481 master-0 kubenswrapper[28504]: I0318 13:26:48.566429 28504 scope.go:117] "RemoveContainer" containerID="377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703" Mar 18 
13:26:48.583465 master-0 kubenswrapper[28504]: I0318 13:26:48.583367 28504 scope.go:117] "RemoveContainer" containerID="2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb" Mar 18 13:26:48.605311 master-0 kubenswrapper[28504]: I0318 13:26:48.605173 28504 scope.go:117] "RemoveContainer" containerID="e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658" Mar 18 13:26:48.624658 master-0 kubenswrapper[28504]: I0318 13:26:48.624596 28504 scope.go:117] "RemoveContainer" containerID="3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292" Mar 18 13:26:48.625325 master-0 kubenswrapper[28504]: E0318 13:26:48.625273 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292\": container with ID starting with 3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292 not found: ID does not exist" containerID="3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292" Mar 18 13:26:48.626117 master-0 kubenswrapper[28504]: I0318 13:26:48.625332 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292"} err="failed to get container status \"3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292\": rpc error: code = NotFound desc = could not find container \"3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292\": container with ID starting with 3cf046bac848fd9b98ca4687c11ebc4dab0ff9457003912da71140fe9d182292 not found: ID does not exist" Mar 18 13:26:48.626117 master-0 kubenswrapper[28504]: I0318 13:26:48.625369 28504 scope.go:117] "RemoveContainer" containerID="72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a" Mar 18 13:26:48.627183 master-0 kubenswrapper[28504]: E0318 13:26:48.627123 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a\": container with ID starting with 72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a not found: ID does not exist" containerID="72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a" Mar 18 13:26:48.627320 master-0 kubenswrapper[28504]: I0318 13:26:48.627261 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a"} err="failed to get container status \"72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a\": rpc error: code = NotFound desc = could not find container \"72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a\": container with ID starting with 72a069687b3c9ef62bab13abc5527b84fd02758e1d6cc5b1b45a4026fa01b58a not found: ID does not exist" Mar 18 13:26:48.627320 master-0 kubenswrapper[28504]: I0318 13:26:48.627310 28504 scope.go:117] "RemoveContainer" containerID="2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7" Mar 18 13:26:48.627782 master-0 kubenswrapper[28504]: E0318 13:26:48.627733 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7\": container with ID starting with 2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7 not found: ID does not exist" containerID="2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7" Mar 18 13:26:48.627837 master-0 kubenswrapper[28504]: I0318 13:26:48.627784 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7"} err="failed to get container status \"2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7\": rpc error: code = NotFound desc = 
could not find container \"2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7\": container with ID starting with 2dbddceb4f62a187377fac19cce981e7d88a91f87a33640abf8fa581a60868d7 not found: ID does not exist" Mar 18 13:26:48.627837 master-0 kubenswrapper[28504]: I0318 13:26:48.627819 28504 scope.go:117] "RemoveContainer" containerID="377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703" Mar 18 13:26:48.628212 master-0 kubenswrapper[28504]: E0318 13:26:48.628157 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703\": container with ID starting with 377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703 not found: ID does not exist" containerID="377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703" Mar 18 13:26:48.628285 master-0 kubenswrapper[28504]: I0318 13:26:48.628208 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703"} err="failed to get container status \"377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703\": rpc error: code = NotFound desc = could not find container \"377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703\": container with ID starting with 377a83a18fc12c3c2467c48fc4fffce0e3c49653a95ea0cf4d2345011a573703 not found: ID does not exist" Mar 18 13:26:48.628285 master-0 kubenswrapper[28504]: I0318 13:26:48.628248 28504 scope.go:117] "RemoveContainer" containerID="2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb" Mar 18 13:26:48.628702 master-0 kubenswrapper[28504]: E0318 13:26:48.628661 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb\": container with ID starting with 
2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb not found: ID does not exist" containerID="2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb" Mar 18 13:26:48.628760 master-0 kubenswrapper[28504]: I0318 13:26:48.628698 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb"} err="failed to get container status \"2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb\": rpc error: code = NotFound desc = could not find container \"2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb\": container with ID starting with 2b80d49adc9d2f83dbbd9220b14172f3ab4f4c6ab2a4e2ce774916e196c3bafb not found: ID does not exist" Mar 18 13:26:48.628760 master-0 kubenswrapper[28504]: I0318 13:26:48.628721 28504 scope.go:117] "RemoveContainer" containerID="e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658" Mar 18 13:26:48.630292 master-0 kubenswrapper[28504]: E0318 13:26:48.630240 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658\": container with ID starting with e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658 not found: ID does not exist" containerID="e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658" Mar 18 13:26:48.630377 master-0 kubenswrapper[28504]: I0318 13:26:48.630288 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658"} err="failed to get container status \"e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658\": rpc error: code = NotFound desc = could not find container \"e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658\": container with ID starting with 
e8c91df65ad2a5d542d44c3e9b72528a23b6488107d0629d4bcdb9c771675658 not found: ID does not exist" Mar 18 13:26:48.761453 master-0 kubenswrapper[28504]: I0318 13:26:48.761397 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" path="/var/lib/kubelet/pods/b45ea2ef1cf2bc9d1d994d6538ae0a64/volumes" Mar 18 13:26:48.994163 master-0 kubenswrapper[28504]: E0318 13:26:48.994106 28504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 13:26:50.595066 master-0 kubenswrapper[28504]: E0318 13:26:50.594998 28504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 13:26:52.754529 master-0 kubenswrapper[28504]: I0318 13:26:52.754465 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:52.755988 master-0 kubenswrapper[28504]: I0318 13:26:52.755863 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:53.472887 master-0 kubenswrapper[28504]: I0318 13:26:53.472816 
28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:26:53.473096 master-0 kubenswrapper[28504]: I0318 13:26:53.472887 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:26:53.797307 master-0 kubenswrapper[28504]: E0318 13:26:53.796904 28504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 13:26:54.873643 master-0 kubenswrapper[28504]: I0318 13:26:54.873573 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:26:54.874174 master-0 kubenswrapper[28504]: I0318 13:26:54.873698 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:26:56.421215 master-0 kubenswrapper[28504]: E0318 13:26:56.421072 28504 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: 
connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df27369e5bade openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebbfbf2b56df0323ba118d68bfdad8b9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:26:45.76566755 +0000 UTC m=+183.260473325,LastTimestamp:2026-03-18 13:26:45.76566755 +0000 UTC m=+183.260473325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:26:59.748906 master-0 kubenswrapper[28504]: I0318 13:26:59.748847 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:59.750257 master-0 kubenswrapper[28504]: I0318 13:26:59.750192 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:59.750969 master-0 kubenswrapper[28504]: I0318 13:26:59.750909 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:26:59.768911 master-0 kubenswrapper[28504]: I0318 13:26:59.768846 28504 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:26:59.768911 master-0 kubenswrapper[28504]: I0318 13:26:59.768895 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:26:59.770100 master-0 kubenswrapper[28504]: E0318 13:26:59.770011 28504 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:59.770624 master-0 kubenswrapper[28504]: I0318 13:26:59.770595 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:26:59.791009 master-0 kubenswrapper[28504]: W0318 13:26:59.790918 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274c4bebf95a655851b2cf276fe43ef7.slice/crio-d801c9986c5f61b9ddd0004e512a8ecde6e28e4d2fd4a67849dd007b4c9577c9 WatchSource:0}: Error finding container d801c9986c5f61b9ddd0004e512a8ecde6e28e4d2fd4a67849dd007b4c9577c9: Status 404 returned error can't find the container with id d801c9986c5f61b9ddd0004e512a8ecde6e28e4d2fd4a67849dd007b4c9577c9 Mar 18 13:27:00.199417 master-0 kubenswrapper[28504]: E0318 13:27:00.199154 28504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Mar 18 13:27:00.603087 master-0 kubenswrapper[28504]: I0318 13:27:00.603042 28504 generic.go:334] "Generic (PLEG): container finished" podID="274c4bebf95a655851b2cf276fe43ef7" containerID="6593c55ebdcf0486307c1da8e7768fae961dfb06d37d9f5e5c0e0fbc228d8141" exitCode=0 Mar 18 13:27:00.603395 master-0 kubenswrapper[28504]: I0318 13:27:00.603073 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerDied","Data":"6593c55ebdcf0486307c1da8e7768fae961dfb06d37d9f5e5c0e0fbc228d8141"} Mar 18 13:27:00.603637 master-0 kubenswrapper[28504]: I0318 13:27:00.603612 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"d801c9986c5f61b9ddd0004e512a8ecde6e28e4d2fd4a67849dd007b4c9577c9"} Mar 18 13:27:00.603947 master-0 kubenswrapper[28504]: I0318 13:27:00.603909 28504 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:00.603947 master-0 kubenswrapper[28504]: I0318 13:27:00.603929 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:00.604607 master-0 kubenswrapper[28504]: I0318 13:27:00.604575 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:00.604781 master-0 kubenswrapper[28504]: E0318 13:27:00.604727 28504 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:00.605866 master-0 kubenswrapper[28504]: I0318 13:27:00.605711 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:00.610040 master-0 kubenswrapper[28504]: I0318 13:27:00.610003 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/0.log" Mar 18 13:27:00.610227 master-0 kubenswrapper[28504]: I0318 13:27:00.610199 28504 generic.go:334] "Generic (PLEG): 
container finished" podID="e47f97eb0a0cc5aac7e96e57325228c9" containerID="4f190a1e5cc84fa7af8fb29dad5d8ad4c967b2e4627e9634fba3c046d5f350df" exitCode=1 Mar 18 13:27:00.610380 master-0 kubenswrapper[28504]: I0318 13:27:00.610327 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerDied","Data":"4f190a1e5cc84fa7af8fb29dad5d8ad4c967b2e4627e9634fba3c046d5f350df"} Mar 18 13:27:00.611409 master-0 kubenswrapper[28504]: I0318 13:27:00.611389 28504 scope.go:117] "RemoveContainer" containerID="4f190a1e5cc84fa7af8fb29dad5d8ad4c967b2e4627e9634fba3c046d5f350df" Mar 18 13:27:00.611568 master-0 kubenswrapper[28504]: I0318 13:27:00.611511 28504 status_manager.go:851] "Failed to get status for pod" podUID="e47f97eb0a0cc5aac7e96e57325228c9" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:00.612144 master-0 kubenswrapper[28504]: I0318 13:27:00.612104 28504 status_manager.go:851] "Failed to get status for pod" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:00.613052 master-0 kubenswrapper[28504]: I0318 13:27:00.613013 28504 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Mar 18 13:27:01.100363 master-0 kubenswrapper[28504]: I0318 13:27:01.100261 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:01.619642 master-0 kubenswrapper[28504]: I0318 13:27:01.619591 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/0.log" Mar 18 13:27:01.620383 master-0 kubenswrapper[28504]: I0318 13:27:01.619906 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"9f7865da22d2864df6473b0ab5931f19c2c9b3c114b55a2d057d37caa85a26d7"} Mar 18 13:27:01.628182 master-0 kubenswrapper[28504]: I0318 13:27:01.628120 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"74f47ebd07648c8b30cd8b413941eeef6e015556094b6ef14ce425d8e0d27b1d"} Mar 18 13:27:01.628182 master-0 kubenswrapper[28504]: I0318 13:27:01.628173 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"37f4935fa1bbdf9e7ded8b297bfa872b3129393558562bf196ef962445e228c0"} Mar 18 13:27:01.628182 master-0 kubenswrapper[28504]: I0318 13:27:01.628185 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"68831dcb8b4edfd12ad2701582e36d85dd3e8c66065371af340900b93af22354"} Mar 18 13:27:01.628182 master-0 kubenswrapper[28504]: I0318 13:27:01.628194 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"1932f3f24bbf09584b111dbb2ff91c74d321eade7d57f75a02dfa0b917d93979"} Mar 18 13:27:02.639189 master-0 kubenswrapper[28504]: I0318 13:27:02.639082 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"6154b8c6d82301db78dd84d5d54eacf61efb7bd5e079bcb20833a35f4e92f40a"} Mar 18 13:27:02.639711 master-0 kubenswrapper[28504]: I0318 13:27:02.639422 28504 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:02.639711 master-0 kubenswrapper[28504]: I0318 13:27:02.639457 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:03.472267 master-0 kubenswrapper[28504]: I0318 13:27:03.472200 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:27:03.472511 master-0 kubenswrapper[28504]: I0318 13:27:03.472286 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:27:04.771182 master-0 kubenswrapper[28504]: I0318 13:27:04.771106 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:04.771182 master-0 kubenswrapper[28504]: I0318 13:27:04.771179 28504 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:04.777008 master-0 kubenswrapper[28504]: I0318 13:27:04.776954 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:04.873517 master-0 kubenswrapper[28504]: I0318 13:27:04.873442 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:27:04.873742 master-0 kubenswrapper[28504]: I0318 13:27:04.873532 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:27:06.617962 master-0 kubenswrapper[28504]: I0318 13:27:06.617860 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:06.619898 master-0 kubenswrapper[28504]: I0318 13:27:06.619300 28504 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:27:06.619898 master-0 kubenswrapper[28504]: I0318 13:27:06.619354 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:27:07.657180 master-0 kubenswrapper[28504]: I0318 13:27:07.657120 28504 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:08.680155 master-0 kubenswrapper[28504]: I0318 13:27:08.680054 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:08.680155 master-0 kubenswrapper[28504]: I0318 13:27:08.680105 28504 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:08.680155 master-0 kubenswrapper[28504]: I0318 13:27:08.680143 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:08.684485 master-0 kubenswrapper[28504]: I0318 13:27:08.684435 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:08.687139 master-0 kubenswrapper[28504]: I0318 13:27:08.687077 28504 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="ad022ffa-648a-4736-b12f-5cc3a1ec56a7" Mar 18 13:27:09.686835 master-0 kubenswrapper[28504]: I0318 13:27:09.686779 28504 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:09.686835 master-0 kubenswrapper[28504]: I0318 13:27:09.686815 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:10.694360 master-0 kubenswrapper[28504]: I0318 13:27:10.694265 
28504 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:10.694360 master-0 kubenswrapper[28504]: I0318 13:27:10.694344 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="048bceb5-2b16-4ba6-bea2-c19114476757" Mar 18 13:27:11.100571 master-0 kubenswrapper[28504]: I0318 13:27:11.100496 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:12.768853 master-0 kubenswrapper[28504]: I0318 13:27:12.768777 28504 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="ad022ffa-648a-4736-b12f-5cc3a1ec56a7" Mar 18 13:27:13.472409 master-0 kubenswrapper[28504]: I0318 13:27:13.472347 28504 patch_prober.go:28] interesting pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:27:13.472659 master-0 kubenswrapper[28504]: I0318 13:27:13.472424 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:27:14.874091 master-0 kubenswrapper[28504]: I0318 13:27:14.874012 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:27:14.876631 master-0 
kubenswrapper[28504]: I0318 13:27:14.874099 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:27:16.617848 master-0 kubenswrapper[28504]: I0318 13:27:16.617781 28504 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:27:16.619458 master-0 kubenswrapper[28504]: I0318 13:27:16.619415 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:27:17.123534 master-0 kubenswrapper[28504]: I0318 13:27:17.123479 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 13:27:17.169434 master-0 kubenswrapper[28504]: I0318 13:27:17.169071 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 13:27:17.894131 master-0 kubenswrapper[28504]: I0318 13:27:17.893995 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 13:27:17.907365 master-0 kubenswrapper[28504]: I0318 13:27:17.907313 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:27:17.927782 master-0 kubenswrapper[28504]: I0318 13:27:17.927721 28504 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:27:18.222335 master-0 kubenswrapper[28504]: I0318 13:27:18.222281 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 13:27:18.355470 master-0 kubenswrapper[28504]: I0318 13:27:18.355407 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-xzmx4" Mar 18 13:27:18.368805 master-0 kubenswrapper[28504]: I0318 13:27:18.368761 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 13:27:18.443604 master-0 kubenswrapper[28504]: I0318 13:27:18.443541 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:27:18.447001 master-0 kubenswrapper[28504]: I0318 13:27:18.446865 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 13:27:18.562248 master-0 kubenswrapper[28504]: I0318 13:27:18.562130 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2d45m" Mar 18 13:27:18.633192 master-0 kubenswrapper[28504]: I0318 13:27:18.633125 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:27:18.810741 master-0 kubenswrapper[28504]: I0318 13:27:18.810672 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 13:27:18.889483 master-0 kubenswrapper[28504]: I0318 13:27:18.889343 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" 
Mar 18 13:27:19.506251 master-0 kubenswrapper[28504]: I0318 13:27:19.506189 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:27:19.663572 master-0 kubenswrapper[28504]: I0318 13:27:19.663521 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 13:27:19.964034 master-0 kubenswrapper[28504]: I0318 13:27:19.963809 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:27:20.028136 master-0 kubenswrapper[28504]: I0318 13:27:20.024750 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 13:27:20.084326 master-0 kubenswrapper[28504]: I0318 13:27:20.084288 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 13:27:20.119931 master-0 kubenswrapper[28504]: I0318 13:27:20.119868 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:27:20.146182 master-0 kubenswrapper[28504]: I0318 13:27:20.146127 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 13:27:20.297349 master-0 kubenswrapper[28504]: I0318 13:27:20.297219 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 13:27:20.300790 master-0 kubenswrapper[28504]: I0318 13:27:20.300730 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:27:20.316567 master-0 kubenswrapper[28504]: I0318 13:27:20.316522 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 
13:27:20.344821 master-0 kubenswrapper[28504]: I0318 13:27:20.344515 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-crbnv" Mar 18 13:27:20.402226 master-0 kubenswrapper[28504]: I0318 13:27:20.402153 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:27:20.434541 master-0 kubenswrapper[28504]: I0318 13:27:20.434424 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:27:20.607767 master-0 kubenswrapper[28504]: I0318 13:27:20.607646 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-9rpkw" Mar 18 13:27:20.628444 master-0 kubenswrapper[28504]: I0318 13:27:20.628363 28504 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 13:27:20.639910 master-0 kubenswrapper[28504]: I0318 13:27:20.639852 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 13:27:20.674237 master-0 kubenswrapper[28504]: I0318 13:27:20.674182 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 13:27:20.713383 master-0 kubenswrapper[28504]: I0318 13:27:20.713318 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:27:20.730831 master-0 kubenswrapper[28504]: I0318 13:27:20.726720 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 13:27:20.742922 master-0 kubenswrapper[28504]: I0318 13:27:20.742852 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 13:27:20.749725 
master-0 kubenswrapper[28504]: I0318 13:27:20.749656 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-rxnwp" Mar 18 13:27:20.807876 master-0 kubenswrapper[28504]: I0318 13:27:20.807806 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:27:20.876614 master-0 kubenswrapper[28504]: I0318 13:27:20.876474 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:27:20.880172 master-0 kubenswrapper[28504]: I0318 13:27:20.880130 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 13:27:20.990022 master-0 kubenswrapper[28504]: I0318 13:27:20.989564 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:27:21.006309 master-0 kubenswrapper[28504]: I0318 13:27:21.006265 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 13:27:21.029367 master-0 kubenswrapper[28504]: I0318 13:27:21.029307 28504 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:27:21.029631 master-0 kubenswrapper[28504]: I0318 13:27:21.029602 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:27:21.200093 master-0 kubenswrapper[28504]: I0318 13:27:21.200027 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:27:21.207420 master-0 kubenswrapper[28504]: I0318 13:27:21.207360 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:27:21.232126 
master-0 kubenswrapper[28504]: I0318 13:27:21.232077 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 13:27:21.276450 master-0 kubenswrapper[28504]: I0318 13:27:21.276400 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 13:27:21.290663 master-0 kubenswrapper[28504]: I0318 13:27:21.290620 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 13:27:21.301372 master-0 kubenswrapper[28504]: I0318 13:27:21.301332 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:27:21.395547 master-0 kubenswrapper[28504]: I0318 13:27:21.395487 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:27:21.459025 master-0 kubenswrapper[28504]: I0318 13:27:21.458874 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 13:27:21.543200 master-0 kubenswrapper[28504]: I0318 13:27:21.543145 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 13:27:21.659870 master-0 kubenswrapper[28504]: I0318 13:27:21.659778 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 13:27:21.670782 master-0 kubenswrapper[28504]: I0318 13:27:21.670610 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 13:27:21.736917 master-0 kubenswrapper[28504]: I0318 13:27:21.736709 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:27:21.779834 master-0 kubenswrapper[28504]: I0318 13:27:21.779780 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lsr6r" Mar 18 13:27:21.848649 master-0 kubenswrapper[28504]: I0318 13:27:21.848490 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:27:21.928343 master-0 kubenswrapper[28504]: I0318 13:27:21.928295 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-579bw" Mar 18 13:27:21.934490 master-0 kubenswrapper[28504]: I0318 13:27:21.934441 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 13:27:21.959576 master-0 kubenswrapper[28504]: I0318 13:27:21.959473 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:27:22.018325 master-0 kubenswrapper[28504]: I0318 13:27:22.018208 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-rm9sr" Mar 18 13:27:22.097288 master-0 kubenswrapper[28504]: I0318 13:27:22.097234 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:27:22.136781 master-0 kubenswrapper[28504]: I0318 13:27:22.136729 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 13:27:22.170698 master-0 kubenswrapper[28504]: I0318 13:27:22.170663 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 13:27:22.172738 master-0 kubenswrapper[28504]: 
I0318 13:27:22.172692 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-5l4kz" Mar 18 13:27:22.174849 master-0 kubenswrapper[28504]: I0318 13:27:22.174620 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 13:27:22.181015 master-0 kubenswrapper[28504]: I0318 13:27:22.180732 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:27:22.330474 master-0 kubenswrapper[28504]: I0318 13:27:22.330346 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:27:22.357310 master-0 kubenswrapper[28504]: I0318 13:27:22.357263 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 13:27:22.371350 master-0 kubenswrapper[28504]: I0318 13:27:22.371310 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 13:27:22.380928 master-0 kubenswrapper[28504]: I0318 13:27:22.380879 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:27:22.522960 master-0 kubenswrapper[28504]: I0318 13:27:22.522872 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 13:27:22.561592 master-0 kubenswrapper[28504]: I0318 13:27:22.561544 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:27:22.583013 master-0 kubenswrapper[28504]: I0318 13:27:22.582872 28504 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:27:22.639578 master-0 kubenswrapper[28504]: I0318 13:27:22.639236 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 13:27:22.644532 master-0 kubenswrapper[28504]: I0318 13:27:22.644477 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 13:27:22.682373 master-0 kubenswrapper[28504]: I0318 13:27:22.682298 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 13:27:22.682916 master-0 kubenswrapper[28504]: I0318 13:27:22.682471 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:27:22.729331 master-0 kubenswrapper[28504]: I0318 13:27:22.729266 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2dpn1smcfbjnb" Mar 18 13:27:22.790488 master-0 kubenswrapper[28504]: I0318 13:27:22.790429 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:27:22.804828 master-0 kubenswrapper[28504]: I0318 13:27:22.804766 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 13:27:22.872426 master-0 kubenswrapper[28504]: I0318 13:27:22.872252 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 13:27:22.927525 master-0 kubenswrapper[28504]: I0318 13:27:22.927488 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:27:23.032526 master-0 kubenswrapper[28504]: I0318 13:27:23.032473 28504 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:27:23.041730 master-0 kubenswrapper[28504]: I0318 13:27:23.041666 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:27:23.154720 master-0 kubenswrapper[28504]: I0318 13:27:23.154574 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 13:27:23.214363 master-0 kubenswrapper[28504]: I0318 13:27:23.214298 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 13:27:23.223021 master-0 kubenswrapper[28504]: I0318 13:27:23.222988 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:27:23.276191 master-0 kubenswrapper[28504]: I0318 13:27:23.276104 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:27:23.294675 master-0 kubenswrapper[28504]: I0318 13:27:23.294627 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:27:23.309219 master-0 kubenswrapper[28504]: I0318 13:27:23.309156 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-x4c9n" Mar 18 13:27:23.338274 master-0 kubenswrapper[28504]: I0318 13:27:23.338109 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 13:27:23.453029 master-0 kubenswrapper[28504]: I0318 13:27:23.452981 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 13:27:23.472474 master-0 kubenswrapper[28504]: I0318 13:27:23.472418 28504 patch_prober.go:28] interesting 
pod/console-86cfd4f585-tfs7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 18 13:27:23.472687 master-0 kubenswrapper[28504]: I0318 13:27:23.472488 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 18 13:27:23.581641 master-0 kubenswrapper[28504]: I0318 13:27:23.581571 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 13:27:23.587642 master-0 kubenswrapper[28504]: I0318 13:27:23.587576 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-lqqf9" Mar 18 13:27:23.669490 master-0 kubenswrapper[28504]: I0318 13:27:23.669407 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 13:27:23.697297 master-0 kubenswrapper[28504]: I0318 13:27:23.697235 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:27:23.714264 master-0 kubenswrapper[28504]: I0318 13:27:23.714140 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:27:23.720115 master-0 kubenswrapper[28504]: I0318 13:27:23.720077 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:27:23.776956 master-0 kubenswrapper[28504]: I0318 13:27:23.776870 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-dockercfg-qmkzj" Mar 18 13:27:23.811161 master-0 kubenswrapper[28504]: I0318 13:27:23.811102 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 13:27:23.815930 master-0 kubenswrapper[28504]: I0318 13:27:23.815754 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 13:27:23.836206 master-0 kubenswrapper[28504]: I0318 13:27:23.836142 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 13:27:23.895416 master-0 kubenswrapper[28504]: I0318 13:27:23.895370 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-8rcx8" Mar 18 13:27:23.966495 master-0 kubenswrapper[28504]: I0318 13:27:23.966386 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:27:23.995915 master-0 kubenswrapper[28504]: I0318 13:27:23.995879 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:27:23.996223 master-0 kubenswrapper[28504]: I0318 13:27:23.996082 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:27:24.001957 master-0 kubenswrapper[28504]: I0318 13:27:24.001903 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 13:27:24.074797 master-0 kubenswrapper[28504]: I0318 13:27:24.074731 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 13:27:24.142394 master-0 kubenswrapper[28504]: I0318 13:27:24.142339 28504 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 13:27:24.150365 master-0 kubenswrapper[28504]: I0318 13:27:24.150301 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 13:27:24.232490 master-0 kubenswrapper[28504]: I0318 13:27:24.232341 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 13:27:24.363551 master-0 kubenswrapper[28504]: I0318 13:27:24.363487 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 13:27:24.397065 master-0 kubenswrapper[28504]: I0318 13:27:24.396989 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:27:24.424415 master-0 kubenswrapper[28504]: I0318 13:27:24.424354 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 13:27:24.516451 master-0 kubenswrapper[28504]: I0318 13:27:24.516330 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:27:24.529037 master-0 kubenswrapper[28504]: I0318 13:27:24.528984 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 13:27:24.597770 master-0 kubenswrapper[28504]: I0318 13:27:24.597673 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 13:27:24.614482 master-0 kubenswrapper[28504]: I0318 13:27:24.614419 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 13:27:24.659712 master-0 kubenswrapper[28504]: I0318 13:27:24.659640 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 13:27:24.723450 master-0 kubenswrapper[28504]: I0318 13:27:24.723409 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 13:27:24.858658 master-0 kubenswrapper[28504]: I0318 13:27:24.858543 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:27:24.873514 master-0 kubenswrapper[28504]: I0318 13:27:24.873456 28504 patch_prober.go:28] interesting pod/console-7486c568bf-jngmz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:27:24.873735 master-0 kubenswrapper[28504]: I0318 13:27:24.873529 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:27:25.034470 master-0 kubenswrapper[28504]: I0318 13:27:25.034413 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:27:25.067034 master-0 kubenswrapper[28504]: I0318 13:27:25.066985 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:27:25.084272 master-0 kubenswrapper[28504]: I0318 13:27:25.084189 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:27:25.118450 master-0 kubenswrapper[28504]: I0318 13:27:25.118313 28504 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 13:27:25.136382 master-0 kubenswrapper[28504]: I0318 13:27:25.136318 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 13:27:25.193379 master-0 kubenswrapper[28504]: I0318 13:27:25.193323 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 13:27:25.199329 master-0 kubenswrapper[28504]: I0318 13:27:25.199275 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 13:27:25.259389 master-0 kubenswrapper[28504]: I0318 13:27:25.259308 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 13:27:25.281502 master-0 kubenswrapper[28504]: I0318 13:27:25.281440 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 13:27:25.337367 master-0 kubenswrapper[28504]: I0318 13:27:25.337284 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 13:27:25.381877 master-0 kubenswrapper[28504]: I0318 13:27:25.381753 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:27:25.428598 master-0 kubenswrapper[28504]: I0318 13:27:25.428502 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:27:25.435840 master-0 kubenswrapper[28504]: I0318 13:27:25.435560 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-ntbvj" Mar 18 13:27:25.501240 master-0 kubenswrapper[28504]: I0318 13:27:25.501153 28504 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:27:25.576142 master-0 kubenswrapper[28504]: I0318 13:27:25.576100 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:27:25.746914 master-0 kubenswrapper[28504]: I0318 13:27:25.746847 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 13:27:25.747707 master-0 kubenswrapper[28504]: I0318 13:27:25.747671 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:27:25.797416 master-0 kubenswrapper[28504]: I0318 13:27:25.797342 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 13:27:25.847864 master-0 kubenswrapper[28504]: I0318 13:27:25.847799 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 13:27:25.865711 master-0 kubenswrapper[28504]: I0318 13:27:25.865617 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 13:27:25.889386 master-0 kubenswrapper[28504]: I0318 13:27:25.889337 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 13:27:25.943802 master-0 kubenswrapper[28504]: I0318 13:27:25.943719 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:27:25.987552 master-0 kubenswrapper[28504]: I0318 13:27:25.987489 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:27:26.001760 master-0 
kubenswrapper[28504]: I0318 13:27:26.001534 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:27:26.005046 master-0 kubenswrapper[28504]: I0318 13:27:26.004992 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 13:27:26.030370 master-0 kubenswrapper[28504]: I0318 13:27:26.030284 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:27:26.056905 master-0 kubenswrapper[28504]: I0318 13:27:26.056816 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 13:27:26.275791 master-0 kubenswrapper[28504]: I0318 13:27:26.275691 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:27:26.284470 master-0 kubenswrapper[28504]: I0318 13:27:26.284409 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:27:26.293056 master-0 kubenswrapper[28504]: I0318 13:27:26.292984 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 13:27:26.297732 master-0 kubenswrapper[28504]: I0318 13:27:26.297663 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 13:27:26.362810 master-0 kubenswrapper[28504]: I0318 13:27:26.362730 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:27:26.390864 master-0 kubenswrapper[28504]: I0318 13:27:26.390774 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 
13:27:26.618170 master-0 kubenswrapper[28504]: I0318 13:27:26.618033 28504 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:27:26.618170 master-0 kubenswrapper[28504]: I0318 13:27:26.618109 28504 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:27:26.618170 master-0 kubenswrapper[28504]: I0318 13:27:26.618164 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:26.618888 master-0 kubenswrapper[28504]: I0318 13:27:26.618802 28504 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9f7865da22d2864df6473b0ab5931f19c2c9b3c114b55a2d057d37caa85a26d7"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 13:27:26.619134 master-0 kubenswrapper[28504]: I0318 13:27:26.618969 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager" containerID="cri-o://9f7865da22d2864df6473b0ab5931f19c2c9b3c114b55a2d057d37caa85a26d7" gracePeriod=30 Mar 18 13:27:26.627607 master-0 kubenswrapper[28504]: I0318 13:27:26.627548 28504 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 13:27:26.634449 master-0 kubenswrapper[28504]: I0318 13:27:26.634392 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 13:27:26.635357 master-0 kubenswrapper[28504]: I0318 13:27:26.635322 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-wl4c6" Mar 18 13:27:26.659722 master-0 kubenswrapper[28504]: I0318 13:27:26.659670 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 13:27:26.671889 master-0 kubenswrapper[28504]: I0318 13:27:26.671846 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:27:26.759241 master-0 kubenswrapper[28504]: I0318 13:27:26.759155 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 13:27:26.814693 master-0 kubenswrapper[28504]: I0318 13:27:26.814634 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-spdqf" Mar 18 13:27:26.815753 master-0 kubenswrapper[28504]: I0318 13:27:26.815674 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:27:26.817533 master-0 kubenswrapper[28504]: I0318 13:27:26.817471 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:27:26.859559 master-0 kubenswrapper[28504]: I0318 13:27:26.859484 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:27:26.933986 master-0 kubenswrapper[28504]: I0318 13:27:26.933777 28504 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:27:26.978662 master-0 kubenswrapper[28504]: I0318 13:27:26.978602 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-stj86" Mar 18 13:27:26.984792 master-0 kubenswrapper[28504]: I0318 13:27:26.984752 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 13:27:27.006825 master-0 kubenswrapper[28504]: I0318 13:27:27.006782 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 13:27:27.009093 master-0 kubenswrapper[28504]: I0318 13:27:27.009061 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:27:27.046551 master-0 kubenswrapper[28504]: I0318 13:27:27.046491 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:27:27.126228 master-0 kubenswrapper[28504]: I0318 13:27:27.126187 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 13:27:27.202014 master-0 kubenswrapper[28504]: I0318 13:27:27.201961 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:27:27.239528 master-0 kubenswrapper[28504]: I0318 13:27:27.239460 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:27:27.241915 master-0 kubenswrapper[28504]: I0318 13:27:27.241857 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:27:27.278326 master-0 kubenswrapper[28504]: I0318 13:27:27.278259 28504 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 13:27:27.385443 master-0 kubenswrapper[28504]: I0318 13:27:27.385368 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 13:27:27.463215 master-0 kubenswrapper[28504]: I0318 13:27:27.463092 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:27:27.511552 master-0 kubenswrapper[28504]: I0318 13:27:27.511501 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 13:27:27.554261 master-0 kubenswrapper[28504]: I0318 13:27:27.554160 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-j868d" Mar 18 13:27:27.699352 master-0 kubenswrapper[28504]: I0318 13:27:27.699307 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-8zrbw" Mar 18 13:27:27.716672 master-0 kubenswrapper[28504]: I0318 13:27:27.716538 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 13:27:27.730930 master-0 kubenswrapper[28504]: I0318 13:27:27.730865 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 13:27:27.732626 master-0 kubenswrapper[28504]: I0318 13:27:27.732585 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:27:27.768814 master-0 kubenswrapper[28504]: I0318 13:27:27.768746 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:27:27.778183 master-0 
kubenswrapper[28504]: I0318 13:27:27.778133 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 13:27:27.840366 master-0 kubenswrapper[28504]: I0318 13:27:27.840276 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:27:27.876219 master-0 kubenswrapper[28504]: I0318 13:27:27.876124 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 13:27:27.904284 master-0 kubenswrapper[28504]: I0318 13:27:27.904202 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:27:27.947446 master-0 kubenswrapper[28504]: I0318 13:27:27.947379 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 13:27:27.983838 master-0 kubenswrapper[28504]: I0318 13:27:27.983706 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-68f42" Mar 18 13:27:28.041176 master-0 kubenswrapper[28504]: I0318 13:27:28.041090 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 13:27:28.067969 master-0 kubenswrapper[28504]: I0318 13:27:28.067903 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:27:28.086468 master-0 kubenswrapper[28504]: I0318 13:27:28.086413 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:27:28.105888 master-0 kubenswrapper[28504]: I0318 13:27:28.105825 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 13:27:28.143639 master-0 
kubenswrapper[28504]: I0318 13:27:28.143595 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 13:27:28.150236 master-0 kubenswrapper[28504]: I0318 13:27:28.150212 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:27:28.168916 master-0 kubenswrapper[28504]: I0318 13:27:28.168852 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 13:27:28.169275 master-0 kubenswrapper[28504]: I0318 13:27:28.169228 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:27:28.218222 master-0 kubenswrapper[28504]: I0318 13:27:28.218161 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:27:28.236315 master-0 kubenswrapper[28504]: I0318 13:27:28.236186 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 13:27:28.276352 master-0 kubenswrapper[28504]: I0318 13:27:28.276180 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 13:27:28.411194 master-0 kubenswrapper[28504]: I0318 13:27:28.411146 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:27:28.448854 master-0 kubenswrapper[28504]: I0318 13:27:28.448809 28504 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:27:28.451419 master-0 kubenswrapper[28504]: I0318 13:27:28.451239 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=43.451220785 
podStartE2EDuration="43.451220785s" podCreationTimestamp="2026-03-18 13:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:27:07.192337451 +0000 UTC m=+204.687143236" watchObservedRunningTime="2026-03-18 13:27:28.451220785 +0000 UTC m=+225.946026560" Mar 18 13:27:28.455001 master-0 kubenswrapper[28504]: I0318 13:27:28.454963 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:27:28.455090 master-0 kubenswrapper[28504]: I0318 13:27:28.455016 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:27:28.462984 master-0 kubenswrapper[28504]: I0318 13:27:28.462166 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:28.479679 master-0 kubenswrapper[28504]: I0318 13:27:28.479295 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=21.47926863 podStartE2EDuration="21.47926863s" podCreationTimestamp="2026-03-18 13:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:27:28.477736506 +0000 UTC m=+225.972542281" watchObservedRunningTime="2026-03-18 13:27:28.47926863 +0000 UTC m=+225.974074405" Mar 18 13:27:28.481067 master-0 kubenswrapper[28504]: I0318 13:27:28.481029 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:27:28.512560 master-0 kubenswrapper[28504]: I0318 13:27:28.512430 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 13:27:28.534077 master-0 kubenswrapper[28504]: I0318 
13:27:28.534017 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:27:28.563961 master-0 kubenswrapper[28504]: I0318 13:27:28.563907 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:27:28.599415 master-0 kubenswrapper[28504]: I0318 13:27:28.599358 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:27:28.706880 master-0 kubenswrapper[28504]: I0318 13:27:28.706809 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:27:28.834493 master-0 kubenswrapper[28504]: I0318 13:27:28.834356 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:27:28.852131 master-0 kubenswrapper[28504]: I0318 13:27:28.852072 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 13:27:28.978174 master-0 kubenswrapper[28504]: I0318 13:27:28.978113 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 13:27:29.005454 master-0 kubenswrapper[28504]: I0318 13:27:29.005393 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 13:27:29.078241 master-0 kubenswrapper[28504]: I0318 13:27:29.078188 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-29gmv" Mar 18 13:27:29.083859 master-0 kubenswrapper[28504]: I0318 13:27:29.083801 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-lktsx" Mar 18 13:27:29.098990 master-0 kubenswrapper[28504]: I0318 13:27:29.098866 28504 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 13:27:29.138046 master-0 kubenswrapper[28504]: I0318 13:27:29.137920 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:27:29.195060 master-0 kubenswrapper[28504]: I0318 13:27:29.194987 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 13:27:29.316873 master-0 kubenswrapper[28504]: I0318 13:27:29.316703 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 13:27:29.417620 master-0 kubenswrapper[28504]: I0318 13:27:29.417487 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 13:27:29.461234 master-0 kubenswrapper[28504]: I0318 13:27:29.457729 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2bvqk" Mar 18 13:27:29.547631 master-0 kubenswrapper[28504]: I0318 13:27:29.547573 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:27:29.560926 master-0 kubenswrapper[28504]: I0318 13:27:29.560836 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 13:27:29.607328 master-0 kubenswrapper[28504]: I0318 13:27:29.607243 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 13:27:29.609560 master-0 kubenswrapper[28504]: I0318 13:27:29.609507 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-kw85n" Mar 18 13:27:29.767216 master-0 kubenswrapper[28504]: 
I0318 13:27:29.767123 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 13:27:29.781667 master-0 kubenswrapper[28504]: I0318 13:27:29.781605 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:27:29.793533 master-0 kubenswrapper[28504]: I0318 13:27:29.793490 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:27:29.858987 master-0 kubenswrapper[28504]: I0318 13:27:29.855609 28504 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:27:29.858987 master-0 kubenswrapper[28504]: I0318 13:27:29.855894 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" containerID="cri-o://9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96" gracePeriod=5 Mar 18 13:27:29.918845 master-0 kubenswrapper[28504]: I0318 13:27:29.918787 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 13:27:29.929130 master-0 kubenswrapper[28504]: I0318 13:27:29.929064 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 13:27:29.940443 master-0 kubenswrapper[28504]: I0318 13:27:29.940389 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:27:29.974551 master-0 kubenswrapper[28504]: I0318 13:27:29.974497 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:27:30.045153 master-0 kubenswrapper[28504]: I0318 13:27:30.045008 28504 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 13:27:30.077153 master-0 kubenswrapper[28504]: I0318 13:27:30.077074 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:27:30.118226 master-0 kubenswrapper[28504]: I0318 13:27:30.118140 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 13:27:30.133407 master-0 kubenswrapper[28504]: I0318 13:27:30.133328 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 13:27:30.291017 master-0 kubenswrapper[28504]: I0318 13:27:30.290920 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 13:27:30.316734 master-0 kubenswrapper[28504]: I0318 13:27:30.316593 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:27:30.360367 master-0 kubenswrapper[28504]: I0318 13:27:30.360293 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 13:27:30.488416 master-0 kubenswrapper[28504]: I0318 13:27:30.488345 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 13:27:30.688793 master-0 kubenswrapper[28504]: I0318 13:27:30.688707 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:27:30.767202 master-0 kubenswrapper[28504]: I0318 13:27:30.767102 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:27:30.780046 master-0 kubenswrapper[28504]: I0318 13:27:30.779917 28504 reflector.go:368] 
Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 13:27:30.836368 master-0 kubenswrapper[28504]: I0318 13:27:30.836281 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nq9c8" Mar 18 13:27:30.888460 master-0 kubenswrapper[28504]: I0318 13:27:30.888402 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:27:30.983930 master-0 kubenswrapper[28504]: I0318 13:27:30.983812 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:27:31.027598 master-0 kubenswrapper[28504]: I0318 13:27:31.027530 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:27:31.227328 master-0 kubenswrapper[28504]: I0318 13:27:31.227284 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 13:27:31.260757 master-0 kubenswrapper[28504]: I0318 13:27:31.260605 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 13:27:31.293756 master-0 kubenswrapper[28504]: I0318 13:27:31.293679 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:27:31.379827 master-0 kubenswrapper[28504]: I0318 13:27:31.379758 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 13:27:31.401659 master-0 kubenswrapper[28504]: I0318 13:27:31.401592 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:27:31.436957 master-0 kubenswrapper[28504]: I0318 13:27:31.436894 28504 reflector.go:368] Caches populated 
for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:27:31.438990 master-0 kubenswrapper[28504]: I0318 13:27:31.438959 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 13:27:31.507859 master-0 kubenswrapper[28504]: I0318 13:27:31.507806 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:27:31.532913 master-0 kubenswrapper[28504]: I0318 13:27:31.532795 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-wbjcs" Mar 18 13:27:31.591544 master-0 kubenswrapper[28504]: I0318 13:27:31.591492 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jlgxc" Mar 18 13:27:31.683261 master-0 kubenswrapper[28504]: I0318 13:27:31.683198 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-dttzl" Mar 18 13:27:31.712880 master-0 kubenswrapper[28504]: I0318 13:27:31.712812 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:27:31.721271 master-0 kubenswrapper[28504]: I0318 13:27:31.721161 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:27:31.837127 master-0 kubenswrapper[28504]: I0318 13:27:31.836996 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 13:27:31.882196 master-0 kubenswrapper[28504]: I0318 13:27:31.882117 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:27:31.960660 master-0 kubenswrapper[28504]: I0318 13:27:31.960552 28504 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:27:31.961709 master-0 kubenswrapper[28504]: I0318 13:27:31.961655 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:27:32.081391 master-0 kubenswrapper[28504]: I0318 13:27:32.081328 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 13:27:32.116456 master-0 kubenswrapper[28504]: I0318 13:27:32.116311 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:27:32.157671 master-0 kubenswrapper[28504]: I0318 13:27:32.157599 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:27:32.187784 master-0 kubenswrapper[28504]: I0318 13:27:32.187722 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 13:27:32.224741 master-0 kubenswrapper[28504]: I0318 13:27:32.224674 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 13:27:32.262491 master-0 kubenswrapper[28504]: I0318 13:27:32.262427 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:27:32.264707 master-0 kubenswrapper[28504]: I0318 13:27:32.264674 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-6l8l5" Mar 18 13:27:32.309026 master-0 kubenswrapper[28504]: I0318 13:27:32.308952 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 13:27:32.399958 master-0 kubenswrapper[28504]: I0318 13:27:32.399810 28504 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:27:32.445367 master-0 kubenswrapper[28504]: I0318 13:27:32.445306 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:27:32.458189 master-0 kubenswrapper[28504]: I0318 13:27:32.458131 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 13:27:32.515641 master-0 kubenswrapper[28504]: I0318 13:27:32.515578 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:27:32.686734 master-0 kubenswrapper[28504]: I0318 13:27:32.686584 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 13:27:32.762571 master-0 kubenswrapper[28504]: I0318 13:27:32.762529 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 13:27:32.842242 master-0 kubenswrapper[28504]: I0318 13:27:32.842165 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:27:32.842557 master-0 kubenswrapper[28504]: I0318 13:27:32.842502 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-2vnp2" Mar 18 13:27:33.138228 master-0 kubenswrapper[28504]: I0318 13:27:33.138183 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xvxxf" Mar 18 13:27:33.454126 master-0 kubenswrapper[28504]: I0318 13:27:33.454048 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 13:27:33.476461 master-0 kubenswrapper[28504]: I0318 
13:27:33.476399 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86cfd4f585-tfs7z" Mar 18 13:27:33.479963 master-0 kubenswrapper[28504]: I0318 13:27:33.479904 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86cfd4f585-tfs7z" Mar 18 13:27:33.938424 master-0 kubenswrapper[28504]: I0318 13:27:33.938345 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:27:34.338522 master-0 kubenswrapper[28504]: I0318 13:27:34.338370 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:27:34.878472 master-0 kubenswrapper[28504]: I0318 13:27:34.878410 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7486c568bf-jngmz" Mar 18 13:27:34.883748 master-0 kubenswrapper[28504]: I0318 13:27:34.883688 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7486c568bf-jngmz" Mar 18 13:27:35.424394 master-0 kubenswrapper[28504]: I0318 13:27:35.424326 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 18 13:27:35.424950 master-0 kubenswrapper[28504]: I0318 13:27:35.424418 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:35.494552 master-0 kubenswrapper[28504]: I0318 13:27:35.494434 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 13:27:35.494781 master-0 kubenswrapper[28504]: I0318 13:27:35.494606 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 13:27:35.494781 master-0 kubenswrapper[28504]: I0318 13:27:35.494637 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:35.494781 master-0 kubenswrapper[28504]: I0318 13:27:35.494706 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 13:27:35.494781 master-0 kubenswrapper[28504]: I0318 13:27:35.494750 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 13:27:35.494998 master-0 kubenswrapper[28504]: I0318 13:27:35.494799 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests" (OuterVolumeSpecName: "manifests") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:35.494998 master-0 kubenswrapper[28504]: I0318 13:27:35.494833 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 13:27:35.494998 master-0 kubenswrapper[28504]: I0318 13:27:35.494909 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:35.495131 master-0 kubenswrapper[28504]: I0318 13:27:35.495007 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log" (OuterVolumeSpecName: "var-log") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:35.495237 master-0 kubenswrapper[28504]: I0318 13:27:35.495206 28504 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:35.495270 master-0 kubenswrapper[28504]: I0318 13:27:35.495239 28504 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:35.495270 master-0 kubenswrapper[28504]: I0318 13:27:35.495257 28504 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:35.495270 master-0 kubenswrapper[28504]: I0318 13:27:35.495268 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:35.499826 master-0 kubenswrapper[28504]: I0318 13:27:35.499752 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:35.597075 master-0 kubenswrapper[28504]: I0318 13:27:35.596984 28504 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:35.875374 master-0 kubenswrapper[28504]: I0318 13:27:35.875305 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 18 13:27:35.875575 master-0 kubenswrapper[28504]: I0318 13:27:35.875374 28504 generic.go:334] "Generic (PLEG): container finished" podID="ebbfbf2b56df0323ba118d68bfdad8b9" containerID="9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96" exitCode=137 Mar 18 13:27:35.875575 master-0 kubenswrapper[28504]: I0318 13:27:35.875481 28504 scope.go:117] "RemoveContainer" containerID="9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96" Mar 18 13:27:35.875575 master-0 kubenswrapper[28504]: I0318 13:27:35.875509 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:35.892459 master-0 kubenswrapper[28504]: I0318 13:27:35.892400 28504 scope.go:117] "RemoveContainer" containerID="9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96" Mar 18 13:27:35.894076 master-0 kubenswrapper[28504]: E0318 13:27:35.893188 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96\": container with ID starting with 9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96 not found: ID does not exist" containerID="9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96" Mar 18 13:27:35.894076 master-0 kubenswrapper[28504]: I0318 13:27:35.893243 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96"} err="failed to get container status \"9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96\": rpc error: code = NotFound desc = could not find container \"9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96\": container with ID starting with 9a74794e79a5d7f1d9afced9179d8e5d2989ffe0e330d539282156c93a3d6a96 not found: ID does not exist" Mar 18 13:27:36.759230 master-0 kubenswrapper[28504]: I0318 13:27:36.759119 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" path="/var/lib/kubelet/pods/ebbfbf2b56df0323ba118d68bfdad8b9/volumes" Mar 18 13:27:36.759714 master-0 kubenswrapper[28504]: I0318 13:27:36.759448 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 18 13:27:36.775305 master-0 kubenswrapper[28504]: I0318 13:27:36.775145 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:27:36.775305 master-0 kubenswrapper[28504]: I0318 13:27:36.775301 28504 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="78a14775-c5a9-4f49-9104-b779a3dc984a" Mar 18 13:27:36.780488 master-0 kubenswrapper[28504]: I0318 13:27:36.780434 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:27:36.780488 master-0 kubenswrapper[28504]: I0318 13:27:36.780476 28504 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="78a14775-c5a9-4f49-9104-b779a3dc984a" Mar 18 13:27:37.830688 master-0 kubenswrapper[28504]: E0318 13:27:37.830554 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:37.831513 master-0 kubenswrapper[28504]: E0318 13:27:37.830715 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:38.330690065 +0000 UTC m=+235.825495910 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:38.338227 master-0 kubenswrapper[28504]: E0318 13:27:38.338156 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:38.338501 master-0 kubenswrapper[28504]: E0318 13:27:38.338282 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:39.338249358 +0000 UTC m=+236.833055173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:39.351891 master-0 kubenswrapper[28504]: E0318 13:27:39.351823 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:39.351891 master-0 kubenswrapper[28504]: E0318 13:27:39.351898 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:41.351884261 +0000 UTC m=+238.846690036 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:41.380041 master-0 kubenswrapper[28504]: E0318 13:27:41.379900 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:41.380041 master-0 kubenswrapper[28504]: E0318 13:27:41.380000 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:45.379985123 +0000 UTC m=+242.874790898 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:45.440152 master-0 kubenswrapper[28504]: E0318 13:27:45.440022 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:45.440152 master-0 kubenswrapper[28504]: E0318 13:27:45.440158 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:53.440136629 +0000 UTC m=+250.934942404 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:52.802167 master-0 kubenswrapper[28504]: I0318 13:27:52.802108 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:27:53.450899 master-0 kubenswrapper[28504]: E0318 13:27:53.450848 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:53.451235 master-0 kubenswrapper[28504]: E0318 13:27:53.451221 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:09.451204951 +0000 UTC m=+266.946010726 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:27:57.010345 master-0 kubenswrapper[28504]: I0318 13:27:57.010267 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/1.log" Mar 18 13:27:57.011623 master-0 kubenswrapper[28504]: I0318 13:27:57.011597 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/0.log" Mar 18 13:27:57.011792 master-0 kubenswrapper[28504]: I0318 13:27:57.011642 28504 generic.go:334] "Generic (PLEG): container finished" podID="e47f97eb0a0cc5aac7e96e57325228c9" containerID="9f7865da22d2864df6473b0ab5931f19c2c9b3c114b55a2d057d37caa85a26d7" exitCode=137 Mar 18 13:27:57.011792 master-0 kubenswrapper[28504]: I0318 13:27:57.011681 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerDied","Data":"9f7865da22d2864df6473b0ab5931f19c2c9b3c114b55a2d057d37caa85a26d7"} Mar 18 13:27:57.011792 master-0 kubenswrapper[28504]: I0318 13:27:57.011724 28504 scope.go:117] "RemoveContainer" containerID="4f190a1e5cc84fa7af8fb29dad5d8ad4c967b2e4627e9634fba3c046d5f350df" Mar 18 13:27:58.021244 master-0 kubenswrapper[28504]: I0318 13:27:58.021114 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/1.log" Mar 18 13:27:58.022515 master-0 kubenswrapper[28504]: I0318 13:27:58.022325 28504 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e47f97eb0a0cc5aac7e96e57325228c9","Type":"ContainerStarted","Data":"08b274aeaf9abbd5f8e5365d511a8523a672bf472c4f314741ea06a6ce223aa8"} Mar 18 13:28:01.100253 master-0 kubenswrapper[28504]: I0318 13:28:01.100204 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:06.618095 master-0 kubenswrapper[28504]: I0318 13:28:06.618015 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:06.622382 master-0 kubenswrapper[28504]: I0318 13:28:06.622313 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:07.176117 master-0 kubenswrapper[28504]: I0318 13:28:07.088971 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:07.688700 master-0 kubenswrapper[28504]: I0318 13:28:07.688640 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 13:28:09.512770 master-0 kubenswrapper[28504]: E0318 13:28:09.512585 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:28:09.512770 master-0 kubenswrapper[28504]: E0318 13:28:09.512721 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:41.512696928 +0000 UTC m=+299.007502703 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:28:13.669165 master-0 kubenswrapper[28504]: I0318 13:28:13.669082 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86cfd4f585-tfs7z"] Mar 18 13:28:13.692237 master-0 kubenswrapper[28504]: I0318 13:28:13.692136 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5688f96659-j2jrm"] Mar 18 13:28:13.692624 master-0 kubenswrapper[28504]: E0318 13:28:13.692591 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" containerName="installer" Mar 18 13:28:13.692624 master-0 kubenswrapper[28504]: I0318 13:28:13.692617 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" containerName="installer" Mar 18 13:28:13.692709 master-0 kubenswrapper[28504]: E0318 13:28:13.692655 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 13:28:13.692709 master-0 kubenswrapper[28504]: I0318 13:28:13.692667 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 13:28:13.692885 master-0 kubenswrapper[28504]: I0318 13:28:13.692855 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef" containerName="installer" Mar 18 13:28:13.692970 master-0 kubenswrapper[28504]: I0318 13:28:13.692927 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 13:28:13.693590 master-0 kubenswrapper[28504]: I0318 13:28:13.693559 28504 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.696284 master-0 kubenswrapper[28504]: I0318 13:28:13.696226 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-b7cupts75hb7s" Mar 18 13:28:13.702267 master-0 kubenswrapper[28504]: I0318 13:28:13.702167 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk"] Mar 18 13:28:13.704119 master-0 kubenswrapper[28504]: I0318 13:28:13.704083 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.706547 master-0 kubenswrapper[28504]: I0318 13:28:13.706508 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 13:28:13.710046 master-0 kubenswrapper[28504]: I0318 13:28:13.709984 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 13:28:13.710213 master-0 kubenswrapper[28504]: I0318 13:28:13.710177 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 13:28:13.710477 master-0 kubenswrapper[28504]: I0318 13:28:13.710422 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 13:28:13.711195 master-0 kubenswrapper[28504]: I0318 13:28:13.711168 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 13:28:13.714278 master-0 kubenswrapper[28504]: I0318 13:28:13.714210 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 13:28:13.731018 master-0 kubenswrapper[28504]: I0318 
13:28:13.730923 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-568b89d8b8-tppnt"] Mar 18 13:28:13.733765 master-0 kubenswrapper[28504]: I0318 13:28:13.733686 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.796185 master-0 kubenswrapper[28504]: I0318 13:28:13.751121 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5688f96659-j2jrm"] Mar 18 13:28:13.796185 master-0 kubenswrapper[28504]: I0318 13:28:13.751221 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-648866dd9c-ztkrd"] Mar 18 13:28:13.796185 master-0 kubenswrapper[28504]: I0318 13:28:13.751245 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk"] Mar 18 13:28:13.796185 master-0 kubenswrapper[28504]: I0318 13:28:13.751460 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" podUID="b79758b7-9129-496c-abec-80d455648454" containerName="metrics-server" containerID="cri-o://6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb" gracePeriod=170 Mar 18 13:28:13.810965 master-0 kubenswrapper[28504]: I0318 13:28:13.806674 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 13:28:13.810965 master-0 kubenswrapper[28504]: I0318 13:28:13.806987 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 13:28:13.810965 master-0 kubenswrapper[28504]: I0318 13:28:13.807124 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 13:28:13.810965 master-0 kubenswrapper[28504]: I0318 13:28:13.807290 
28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 13:28:13.810965 master-0 kubenswrapper[28504]: I0318 13:28:13.807401 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 13:28:13.813838 master-0 kubenswrapper[28504]: I0318 13:28:13.811802 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7vjlbemcpg6er" Mar 18 13:28:13.885375 master-0 kubenswrapper[28504]: I0318 13:28:13.884563 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-568b89d8b8-tppnt"] Mar 18 13:28:13.900397 master-0 kubenswrapper[28504]: I0318 13:28:13.900325 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-metrics-client-ca\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.900625 master-0 kubenswrapper[28504]: I0318 13:28:13.900403 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9590a761-5b85-4145-b0f6-4675eba16998-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.900625 master-0 kubenswrapper[28504]: I0318 13:28:13.900437 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-secret-metrics-server-tls\") pod \"metrics-server-5688f96659-j2jrm\" (UID: 
\"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.900625 master-0 kubenswrapper[28504]: I0318 13:28:13.900469 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.900625 master-0 kubenswrapper[28504]: I0318 13:28:13.900496 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9590a761-5b85-4145-b0f6-4675eba16998-metrics-server-audit-profiles\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.900625 master-0 kubenswrapper[28504]: I0318 13:28:13.900527 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr7gp\" (UniqueName: \"kubernetes.io/projected/0bcf9360-48a8-492e-93c3-ef39ecdaec04-kube-api-access-kr7gp\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.900625 master-0 kubenswrapper[28504]: I0318 13:28:13.900556 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hgt\" (UniqueName: \"kubernetes.io/projected/9590a761-5b85-4145-b0f6-4675eba16998-kube-api-access-g7hgt\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.900625 
master-0 kubenswrapper[28504]: I0318 13:28:13.900613 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9590a761-5b85-4145-b0f6-4675eba16998-audit-log\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900642 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-grpc-tls\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900665 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-metrics-client-ca\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900691 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-serving-certs-ca-bundle\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900715 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900754 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900784 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-secret-metrics-client-certs\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900806 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-tls\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900835 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nvl\" (UniqueName: \"kubernetes.io/projected/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-kube-api-access-c5nvl\") pod 
\"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900863 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.900910 master-0 kubenswrapper[28504]: I0318 13:28:13.900887 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.901371 master-0 kubenswrapper[28504]: I0318 13:28:13.900918 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-secret-telemeter-client\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:13.901371 master-0 kubenswrapper[28504]: I0318 13:28:13.900960 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-client-ca-bundle\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 
13:28:13.901371 master-0 kubenswrapper[28504]: I0318 13:28:13.900994 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.901371 master-0 kubenswrapper[28504]: I0318 13:28:13.901027 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:13.901371 master-0 kubenswrapper[28504]: I0318 13:28:13.901070 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-federate-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001617 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001685 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001726 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-federate-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001751 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-metrics-client-ca\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001777 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9590a761-5b85-4145-b0f6-4675eba16998-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001796 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-secret-metrics-server-tls\") pod 
\"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001815 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001830 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9590a761-5b85-4145-b0f6-4675eba16998-metrics-server-audit-profiles\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001852 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr7gp\" (UniqueName: \"kubernetes.io/projected/0bcf9360-48a8-492e-93c3-ef39ecdaec04-kube-api-access-kr7gp\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001871 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7hgt\" (UniqueName: \"kubernetes.io/projected/9590a761-5b85-4145-b0f6-4675eba16998-kube-api-access-g7hgt\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: 
I0318 13:28:14.001908 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9590a761-5b85-4145-b0f6-4675eba16998-audit-log\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001925 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-grpc-tls\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001967 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-metrics-client-ca\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.001989 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-serving-certs-ca-bundle\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002006 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: 
\"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002029 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002049 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-secret-metrics-client-certs\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002067 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-tls\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002086 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nvl\" (UniqueName: \"kubernetes.io/projected/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-kube-api-access-c5nvl\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002104 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002121 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002139 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-secret-telemeter-client\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.002697 master-0 kubenswrapper[28504]: I0318 13:28:14.002159 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-client-ca-bundle\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.003829 master-0 kubenswrapper[28504]: I0318 13:28:14.003302 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9590a761-5b85-4145-b0f6-4675eba16998-audit-log\") pod \"metrics-server-5688f96659-j2jrm\" (UID: 
\"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.006980 master-0 kubenswrapper[28504]: I0318 13:28:14.006004 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-client-ca-bundle\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.006980 master-0 kubenswrapper[28504]: I0318 13:28:14.006780 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.007740 master-0 kubenswrapper[28504]: I0318 13:28:14.007698 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-secret-metrics-server-tls\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.008772 master-0 kubenswrapper[28504]: I0318 13:28:14.008748 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-metrics-client-ca\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.008883 master-0 kubenswrapper[28504]: I0318 13:28:14.008839 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9590a761-5b85-4145-b0f6-4675eba16998-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.013617 master-0 kubenswrapper[28504]: E0318 13:28:14.009841 28504 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:28:14.013617 master-0 kubenswrapper[28504]: E0318 13:28:14.009916 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls podName:0bcf9360-48a8-492e-93c3-ef39ecdaec04 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:14.509898762 +0000 UTC m=+272.004704617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls") pod "telemeter-client-5f5f6c46c8-55vzk" (UID: "0bcf9360-48a8-492e-93c3-ef39ecdaec04") : secret "telemeter-client-tls" not found Mar 18 13:28:14.013617 master-0 kubenswrapper[28504]: I0318 13:28:14.010317 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9590a761-5b85-4145-b0f6-4675eba16998-metrics-server-audit-profiles\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.013617 master-0 kubenswrapper[28504]: I0318 13:28:14.011657 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-serving-certs-ca-bundle\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: 
\"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.013617 master-0 kubenswrapper[28504]: I0318 13:28:14.012288 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-metrics-client-ca\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.013617 master-0 kubenswrapper[28504]: I0318 13:28:14.013411 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.015309 master-0 kubenswrapper[28504]: I0318 13:28:14.015281 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9590a761-5b85-4145-b0f6-4675eba16998-secret-metrics-client-certs\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.015776 master-0 kubenswrapper[28504]: I0318 13:28:14.015632 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.024764 master-0 kubenswrapper[28504]: I0318 13:28:14.024709 28504 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.025109 master-0 kubenswrapper[28504]: I0318 13:28:14.025070 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-federate-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.026018 master-0 kubenswrapper[28504]: I0318 13:28:14.025989 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.026347 master-0 kubenswrapper[28504]: I0318 13:28:14.026312 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-grpc-tls\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.026781 master-0 kubenswrapper[28504]: I0318 13:28:14.026567 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-secret-telemeter-client\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: 
\"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.037501 master-0 kubenswrapper[28504]: I0318 13:28:14.029472 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-tls\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.047772 master-0 kubenswrapper[28504]: I0318 13:28:14.044891 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.047772 master-0 kubenswrapper[28504]: I0318 13:28:14.047511 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr7gp\" (UniqueName: \"kubernetes.io/projected/0bcf9360-48a8-492e-93c3-ef39ecdaec04-kube-api-access-kr7gp\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.049983 master-0 kubenswrapper[28504]: I0318 13:28:14.049891 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7hgt\" (UniqueName: \"kubernetes.io/projected/9590a761-5b85-4145-b0f6-4675eba16998-kube-api-access-g7hgt\") pod \"metrics-server-5688f96659-j2jrm\" (UID: \"9590a761-5b85-4145-b0f6-4675eba16998\") " pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.057730 master-0 kubenswrapper[28504]: I0318 13:28:14.057657 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-c5nvl\" (UniqueName: \"kubernetes.io/projected/d477ff80-0635-4f8e-acea-ec2fc42d5c9a-kube-api-access-c5nvl\") pod \"thanos-querier-568b89d8b8-tppnt\" (UID: \"d477ff80-0635-4f8e-acea-ec2fc42d5c9a\") " pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.123023 master-0 kubenswrapper[28504]: I0318 13:28:14.121348 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" Mar 18 13:28:14.186866 master-0 kubenswrapper[28504]: I0318 13:28:14.186816 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" Mar 18 13:28:14.512586 master-0 kubenswrapper[28504]: I0318 13:28:14.511855 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:14.512586 master-0 kubenswrapper[28504]: E0318 13:28:14.512079 28504 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:28:14.512586 master-0 kubenswrapper[28504]: E0318 13:28:14.512180 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls podName:0bcf9360-48a8-492e-93c3-ef39ecdaec04 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:15.512143141 +0000 UTC m=+273.006948926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls") pod "telemeter-client-5f5f6c46c8-55vzk" (UID: "0bcf9360-48a8-492e-93c3-ef39ecdaec04") : secret "telemeter-client-tls" not found Mar 18 13:28:14.614710 master-0 kubenswrapper[28504]: I0318 13:28:14.614633 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5688f96659-j2jrm"] Mar 18 13:28:14.620255 master-0 kubenswrapper[28504]: W0318 13:28:14.620100 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9590a761_5b85_4145_b0f6_4675eba16998.slice/crio-9e2f5d4984b364aff8788f111ce908d13874f0c416297c90a6b5f6072f97c925 WatchSource:0}: Error finding container 9e2f5d4984b364aff8788f111ce908d13874f0c416297c90a6b5f6072f97c925: Status 404 returned error can't find the container with id 9e2f5d4984b364aff8788f111ce908d13874f0c416297c90a6b5f6072f97c925 Mar 18 13:28:14.795727 master-0 kubenswrapper[28504]: I0318 13:28:14.795656 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-568b89d8b8-tppnt"] Mar 18 13:28:14.801969 master-0 kubenswrapper[28504]: W0318 13:28:14.801814 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd477ff80_0635_4f8e_acea_ec2fc42d5c9a.slice/crio-5aaaf0f52cc9d640e751d4d2fd982c193745ca3e53382a358a217593ff4b7c30 WatchSource:0}: Error finding container 5aaaf0f52cc9d640e751d4d2fd982c193745ca3e53382a358a217593ff4b7c30: Status 404 returned error can't find the container with id 5aaaf0f52cc9d640e751d4d2fd982c193745ca3e53382a358a217593ff4b7c30 Mar 18 13:28:15.148167 master-0 kubenswrapper[28504]: I0318 13:28:15.148103 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" 
event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"5aaaf0f52cc9d640e751d4d2fd982c193745ca3e53382a358a217593ff4b7c30"} Mar 18 13:28:15.149969 master-0 kubenswrapper[28504]: I0318 13:28:15.149891 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" event={"ID":"9590a761-5b85-4145-b0f6-4675eba16998","Type":"ContainerStarted","Data":"e9c25bbacde6629ed1110ad84f795faf37864bb155663cfa5d5ecaf6c956d834"} Mar 18 13:28:15.150070 master-0 kubenswrapper[28504]: I0318 13:28:15.149968 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" event={"ID":"9590a761-5b85-4145-b0f6-4675eba16998","Type":"ContainerStarted","Data":"9e2f5d4984b364aff8788f111ce908d13874f0c416297c90a6b5f6072f97c925"} Mar 18 13:28:15.210006 master-0 kubenswrapper[28504]: I0318 13:28:15.209913 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm" podStartSLOduration=2.209891339 podStartE2EDuration="2.209891339s" podCreationTimestamp="2026-03-18 13:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:28:15.202103696 +0000 UTC m=+272.696909491" watchObservedRunningTime="2026-03-18 13:28:15.209891339 +0000 UTC m=+272.704697114" Mar 18 13:28:15.531165 master-0 kubenswrapper[28504]: I0318 13:28:15.531088 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:15.531459 master-0 kubenswrapper[28504]: E0318 13:28:15.531411 28504 secret.go:189] Couldn't get secret 
openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:28:15.531535 master-0 kubenswrapper[28504]: E0318 13:28:15.531515 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls podName:0bcf9360-48a8-492e-93c3-ef39ecdaec04 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:17.53149338 +0000 UTC m=+275.026299155 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls") pod "telemeter-client-5f5f6c46c8-55vzk" (UID: "0bcf9360-48a8-492e-93c3-ef39ecdaec04") : secret "telemeter-client-tls" not found Mar 18 13:28:17.542787 master-0 kubenswrapper[28504]: I0318 13:28:17.542723 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:17.543334 master-0 kubenswrapper[28504]: E0318 13:28:17.542952 28504 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:28:17.543334 master-0 kubenswrapper[28504]: E0318 13:28:17.543058 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls podName:0bcf9360-48a8-492e-93c3-ef39ecdaec04 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:21.543035426 +0000 UTC m=+279.037841261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls") pod "telemeter-client-5f5f6c46c8-55vzk" (UID: "0bcf9360-48a8-492e-93c3-ef39ecdaec04") : secret "telemeter-client-tls" not found Mar 18 13:28:19.261482 master-0 kubenswrapper[28504]: I0318 13:28:19.261425 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"b895f12bd2584ae7ee1878f2a17fc80464ec79366b0a45adf9463368390d54e4"} Mar 18 13:28:19.261482 master-0 kubenswrapper[28504]: I0318 13:28:19.261482 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"78f8e7df3a28bf0486289ea97d8467f767461056695dece5ca62b12bf9b0ad36"} Mar 18 13:28:19.262167 master-0 kubenswrapper[28504]: I0318 13:28:19.261497 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"93c2e76a7c7987a7446b18c41a3859fbee60cd8bb322ac8e341833aaec8d05f4"} Mar 18 13:28:21.277066 master-0 kubenswrapper[28504]: I0318 13:28:21.276981 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"fc2571539ce89dd1400f16447ad094bed100d65d031d8b1216bb1e8fb873b151"} Mar 18 13:28:21.277066 master-0 kubenswrapper[28504]: I0318 13:28:21.277039 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" 
event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"ad75d4f14e3a0ee7ae8c3d95a28f7ee6f1baf47126f574976f257da1b2866175"}
Mar 18 13:28:21.277066 master-0 kubenswrapper[28504]: I0318 13:28:21.277053 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" event={"ID":"d477ff80-0635-4f8e-acea-ec2fc42d5c9a","Type":"ContainerStarted","Data":"618e0c8b0225b0aa49e2bf46e79b6f6416e107c4b6d033b9f8c8a2021bab7c52"}
Mar 18 13:28:21.277677 master-0 kubenswrapper[28504]: I0318 13:28:21.277155 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt"
Mar 18 13:28:21.583579 master-0 kubenswrapper[28504]: I0318 13:28:21.583486 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk"
Mar 18 13:28:21.583889 master-0 kubenswrapper[28504]: E0318 13:28:21.583657 28504 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found
Mar 18 13:28:21.583979 master-0 kubenswrapper[28504]: E0318 13:28:21.583956 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls podName:0bcf9360-48a8-492e-93c3-ef39ecdaec04 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:29.583914048 +0000 UTC m=+287.078719823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls") pod "telemeter-client-5f5f6c46c8-55vzk" (UID: "0bcf9360-48a8-492e-93c3-ef39ecdaec04") : secret "telemeter-client-tls" not found
Mar 18 13:28:24.196721 master-0 kubenswrapper[28504]: I0318 13:28:24.196634 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt"
Mar 18 13:28:24.245368 master-0 kubenswrapper[28504]: I0318 13:28:24.244911 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-568b89d8b8-tppnt" podStartSLOduration=5.61862962 podStartE2EDuration="11.244892034s" podCreationTimestamp="2026-03-18 13:28:13 +0000 UTC" firstStartedPulling="2026-03-18 13:28:14.806291693 +0000 UTC m=+272.301097468" lastFinishedPulling="2026-03-18 13:28:20.432554107 +0000 UTC m=+277.927359882" observedRunningTime="2026-03-18 13:28:21.299514537 +0000 UTC m=+278.794320312" watchObservedRunningTime="2026-03-18 13:28:24.244892034 +0000 UTC m=+281.739697809"
Mar 18 13:28:29.608769 master-0 kubenswrapper[28504]: I0318 13:28:29.608665 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk"
Mar 18 13:28:29.609455 master-0 kubenswrapper[28504]: E0318 13:28:29.608888 28504 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found
Mar 18 13:28:29.609455 master-0 kubenswrapper[28504]: E0318 13:28:29.609006 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls podName:0bcf9360-48a8-492e-93c3-ef39ecdaec04 nodeName:}" failed. No retries permitted until 2026-03-18 13:28:45.60898105 +0000 UTC m=+303.103786855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls") pod "telemeter-client-5f5f6c46c8-55vzk" (UID: "0bcf9360-48a8-492e-93c3-ef39ecdaec04") : secret "telemeter-client-tls" not found
Mar 18 13:28:32.643788 master-0 kubenswrapper[28504]: I0318 13:28:32.643735 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 13:28:32.647256 master-0 kubenswrapper[28504]: I0318 13:28:32.647196 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.652626 master-0 kubenswrapper[28504]: I0318 13:28:32.652588 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 18 13:28:32.652869 master-0 kubenswrapper[28504]: I0318 13:28:32.652818 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 18 13:28:32.653866 master-0 kubenswrapper[28504]: I0318 13:28:32.653806 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 18 13:28:32.653866 master-0 kubenswrapper[28504]: I0318 13:28:32.653827 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 18 13:28:32.654026 master-0 kubenswrapper[28504]: I0318 13:28:32.654010 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1pumdjnmagqse"
Mar 18 13:28:32.654082 master-0
kubenswrapper[28504]: I0318 13:28:32.654040 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 18 13:28:32.654652 master-0 kubenswrapper[28504]: I0318 13:28:32.654624 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 18 13:28:32.655176 master-0 kubenswrapper[28504]: I0318 13:28:32.655156 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 18 13:28:32.663367 master-0 kubenswrapper[28504]: I0318 13:28:32.663329 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 18 13:28:32.663556 master-0 kubenswrapper[28504]: I0318 13:28:32.663489 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 18 13:28:32.666584 master-0 kubenswrapper[28504]: I0318 13:28:32.666546 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 18 13:28:32.667348 master-0 kubenswrapper[28504]: I0318 13:28:32.667325 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 18 13:28:32.688339 master-0 kubenswrapper[28504]: I0318 13:28:32.688288 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 13:28:32.755660 master-0 kubenswrapper[28504]: I0318 13:28:32.755574 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-web-config\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.755660 master-0 kubenswrapper[28504]: I0318 13:28:32.755648 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756085 master-0 kubenswrapper[28504]: I0318 13:28:32.755765 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756085 master-0 kubenswrapper[28504]: I0318 13:28:32.755844 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b69e842-e81a-46b7-b61f-5e2dca016a8d-config-out\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756085 master-0 kubenswrapper[28504]: I0318 13:28:32.755917 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756085 master-0 kubenswrapper[28504]: I0318 13:28:32.755974 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756085 master-0 kubenswrapper[28504]: I0318 13:28:32.756052 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756085 master-0 kubenswrapper[28504]: I0318 13:28:32.756078 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756353 master-0 kubenswrapper[28504]: I0318 13:28:32.756126 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gkh\" (UniqueName: \"kubernetes.io/projected/2b69e842-e81a-46b7-b61f-5e2dca016a8d-kube-api-access-j2gkh\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756353 master-0 kubenswrapper[28504]: I0318 13:28:32.756229 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756353 master-0 kubenswrapper[28504]: I0318 13:28:32.756272 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName:
\"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756353 master-0 kubenswrapper[28504]: I0318 13:28:32.756321 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756353 master-0 kubenswrapper[28504]: I0318 13:28:32.756354 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b69e842-e81a-46b7-b61f-5e2dca016a8d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756566 master-0 kubenswrapper[28504]: I0318 13:28:32.756390 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-config\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756566 master-0 kubenswrapper[28504]: I0318 13:28:32.756428 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756566 master-0 kubenswrapper[28504]: I0318 13:28:32.756504 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756566 master-0 kubenswrapper[28504]: I0318 13:28:32.756539 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.756729 master-0 kubenswrapper[28504]: I0318 13:28:32.756571 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.857311 master-0 kubenswrapper[28504]: I0318 13:28:32.857232 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.857311 master-0 kubenswrapper[28504]: I0318 13:28:32.857306 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b69e842-e81a-46b7-b61f-5e2dca016a8d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.857745 master-0 kubenswrapper[28504]: I0318 13:28:32.857700 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-config\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.857854 master-0 kubenswrapper[28504]: I0318 13:28:32.857837 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858010 master-0 kubenswrapper[28504]: I0318 13:28:32.857994 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858126 master-0 kubenswrapper[28504]: I0318 13:28:32.858112 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858217 master-0 kubenswrapper[28504]: I0318 13:28:32.858202 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName:
\"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858364 master-0 kubenswrapper[28504]: I0318 13:28:32.858346 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-web-config\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858445 master-0 kubenswrapper[28504]: I0318 13:28:32.858412 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858519 master-0 kubenswrapper[28504]: I0318 13:28:32.858503 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858613 master-0 kubenswrapper[28504]: I0318 13:28:32.858600 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858705 master-0 kubenswrapper[28504]: I0318 13:28:32.858690 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b69e842-e81a-46b7-b61f-5e2dca016a8d-config-out\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858817 master-0 kubenswrapper[28504]: I0318 13:28:32.858800 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.858909 master-0 kubenswrapper[28504]: I0318 13:28:32.858895 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.859024 master-0 kubenswrapper[28504]: I0318 13:28:32.859011 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.859106 master-0 kubenswrapper[28504]: I0318 13:28:32.859093 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.859190 master-0 kubenswrapper[28504]: I0318 13:28:32.859178 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2gkh\" (UniqueName: \"kubernetes.io/projected/2b69e842-e81a-46b7-b61f-5e2dca016a8d-kube-api-access-j2gkh\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.859309 master-0 kubenswrapper[28504]: I0318 13:28:32.859280 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.859387 master-0 kubenswrapper[28504]: I0318 13:28:32.859370 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.859490 master-0 kubenswrapper[28504]: I0318 13:28:32.859475 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.860695 master-0 kubenswrapper[28504]: I0318 13:28:32.860675 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.861155 master-0 kubenswrapper[28504]: I0318 13:28:32.861121 28504
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2b69e842-e81a-46b7-b61f-5e2dca016a8d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.861676 master-0 kubenswrapper[28504]: I0318 13:28:32.861644 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.862255 master-0 kubenswrapper[28504]: I0318 13:28:32.862199 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.862388 master-0 kubenswrapper[28504]: I0318 13:28:32.862359 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.863287 master-0 kubenswrapper[28504]: I0318 13:28:32.863235 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.864463 master-0 kubenswrapper[28504]: I0318 13:28:32.864428 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2b69e842-e81a-46b7-b61f-5e2dca016a8d-config-out\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.864773 master-0 kubenswrapper[28504]: I0318 13:28:32.864744 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.865019 master-0 kubenswrapper[28504]: I0318 13:28:32.864987 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-web-config\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.865332 master-0 kubenswrapper[28504]: I0318 13:28:32.865296 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.865529 master-0 kubenswrapper[28504]: I0318 13:28:32.865492 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.866758 master-0 kubenswrapper[28504]: I0318 13:28:32.866717 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-config\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.868442 master-0 kubenswrapper[28504]: I0318 13:28:32.868410 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2b69e842-e81a-46b7-b61f-5e2dca016a8d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.869577 master-0 kubenswrapper[28504]: I0318 13:28:32.869551 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.870779 master-0 kubenswrapper[28504]: I0318 13:28:32.870736 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2b69e842-e81a-46b7-b61f-5e2dca016a8d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.881537 master-0 kubenswrapper[28504]: I0318 13:28:32.881493 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2gkh\" (UniqueName: \"kubernetes.io/projected/2b69e842-e81a-46b7-b61f-5e2dca016a8d-kube-api-access-j2gkh\") pod \"prometheus-k8s-0\" (UID: \"2b69e842-e81a-46b7-b61f-5e2dca016a8d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:32.967492 master-0 kubenswrapper[28504]: I0318 13:28:32.967420 28504 util.go:30] "No sandbox
for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:28:33.492027 master-0 kubenswrapper[28504]: I0318 13:28:33.491478 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 13:28:34.123511 master-0 kubenswrapper[28504]: I0318 13:28:34.123399 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm"
Mar 18 13:28:34.126372 master-0 kubenswrapper[28504]: I0318 13:28:34.126337 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm"
Mar 18 13:28:34.138818 master-0 kubenswrapper[28504]: I0318 13:28:34.138744 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm"
Mar 18 13:28:34.364125 master-0 kubenswrapper[28504]: I0318 13:28:34.364058 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"abd2c17e7651484984f09a3f13d5cbfb97417433a3605d4e5a247057ae9aaedd"}
Mar 18 13:28:34.368106 master-0 kubenswrapper[28504]: I0318 13:28:34.368057 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5688f96659-j2jrm"
Mar 18 13:28:36.386422 master-0 kubenswrapper[28504]: I0318 13:28:36.386360 28504 generic.go:334] "Generic (PLEG): container finished" podID="2b69e842-e81a-46b7-b61f-5e2dca016a8d" containerID="08550ea0f4fb32bfee391ea0222aeee022786795bda472e6598e0005045b5ecb" exitCode=0
Mar 18 13:28:36.387104 master-0 kubenswrapper[28504]: I0318 13:28:36.386463 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerDied","Data":"08550ea0f4fb32bfee391ea0222aeee022786795bda472e6598e0005045b5ecb"}
Mar 18 13:28:39.024839 master-0 kubenswrapper[28504]: I0318 13:28:39.024763 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" containerID="cri-o://bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300" gracePeriod=15
Mar 18 13:28:40.383511 master-0 kubenswrapper[28504]: I0318 13:28:40.383406 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86cfd4f585-tfs7z_fe6cd387-db28-4db0-b933-ba58fcaf8f24/console/0.log"
Mar 18 13:28:40.383511 master-0 kubenswrapper[28504]: I0318 13:28:40.383521 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:28:40.440375 master-0 kubenswrapper[28504]: I0318 13:28:40.440333 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86cfd4f585-tfs7z_fe6cd387-db28-4db0-b933-ba58fcaf8f24/console/0.log"
Mar 18 13:28:40.440572 master-0 kubenswrapper[28504]: I0318 13:28:40.440391 28504 generic.go:334] "Generic (PLEG): container finished" podID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerID="bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300" exitCode=2
Mar 18 13:28:40.440572 master-0 kubenswrapper[28504]: I0318 13:28:40.440437 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86cfd4f585-tfs7z"
Mar 18 13:28:40.440658 master-0 kubenswrapper[28504]: I0318 13:28:40.440448 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86cfd4f585-tfs7z" event={"ID":"fe6cd387-db28-4db0-b933-ba58fcaf8f24","Type":"ContainerDied","Data":"bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300"}
Mar 18 13:28:40.440703 master-0 kubenswrapper[28504]: I0318 13:28:40.440666 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86cfd4f585-tfs7z" event={"ID":"fe6cd387-db28-4db0-b933-ba58fcaf8f24","Type":"ContainerDied","Data":"9a30dd194d8f1bf9917bc22908b4ef1f9d46e1509a2f94cd423b3dfc7087a162"}
Mar 18 13:28:40.440737 master-0 kubenswrapper[28504]: I0318 13:28:40.440702 28504 scope.go:117] "RemoveContainer" containerID="bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300"
Mar 18 13:28:40.444435 master-0 kubenswrapper[28504]: I0318 13:28:40.444383 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"9392c64d989e3ef44fd5b1f773e8dd4ac12d5e62902bfa8eea490f00a2a33d26"}
Mar 18 13:28:40.452014 master-0 kubenswrapper[28504]: I0318 13:28:40.451955 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-service-ca\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18 13:28:40.452166 master-0 kubenswrapper[28504]: I0318 13:28:40.452141 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-oauth-config\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18
13:28:40.452225 master-0 kubenswrapper[28504]: I0318 13:28:40.452196 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-config\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18 13:28:40.452282 master-0 kubenswrapper[28504]: I0318 13:28:40.452241 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-trusted-ca-bundle\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18 13:28:40.452282 master-0 kubenswrapper[28504]: I0318 13:28:40.452276 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-serving-cert\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18 13:28:40.452350 master-0 kubenswrapper[28504]: I0318 13:28:40.452306 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-oauth-serving-cert\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18 13:28:40.452350 master-0 kubenswrapper[28504]: I0318 13:28:40.452328 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgnpw\" (UniqueName: \"kubernetes.io/projected/fe6cd387-db28-4db0-b933-ba58fcaf8f24-kube-api-access-hgnpw\") pod \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\" (UID: \"fe6cd387-db28-4db0-b933-ba58fcaf8f24\") "
Mar 18 13:28:40.454245 master-0 kubenswrapper[28504]: I0318 13:28:40.454208 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:40.454763 master-0 kubenswrapper[28504]: I0318 13:28:40.454730 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:40.454914 master-0 kubenswrapper[28504]: I0318 13:28:40.454794 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-config" (OuterVolumeSpecName: "console-config") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:40.455204 master-0 kubenswrapper[28504]: I0318 13:28:40.455175 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-service-ca" (OuterVolumeSpecName: "service-ca") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:40.462042 master-0 kubenswrapper[28504]: I0318 13:28:40.457129 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:28:40.462042 master-0 kubenswrapper[28504]: I0318 13:28:40.457417 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:28:40.462042 master-0 kubenswrapper[28504]: I0318 13:28:40.458422 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6cd387-db28-4db0-b933-ba58fcaf8f24-kube-api-access-hgnpw" (OuterVolumeSpecName: "kube-api-access-hgnpw") pod "fe6cd387-db28-4db0-b933-ba58fcaf8f24" (UID: "fe6cd387-db28-4db0-b933-ba58fcaf8f24"). InnerVolumeSpecName "kube-api-access-hgnpw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:28:40.470962 master-0 kubenswrapper[28504]: I0318 13:28:40.467589 28504 scope.go:117] "RemoveContainer" containerID="bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300" Mar 18 13:28:40.473109 master-0 kubenswrapper[28504]: E0318 13:28:40.472981 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300\": container with ID starting with bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300 not found: ID does not exist" containerID="bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300" Mar 18 13:28:40.473109 master-0 kubenswrapper[28504]: I0318 13:28:40.473061 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300"} err="failed to get container status \"bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300\": rpc error: code = NotFound desc = could not find container \"bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300\": container with ID starting with bec036adef1fe13fb0495bdb2412957e76bd8c007c09ebfc6df820d5eece5300 not found: ID does not exist" Mar 18 13:28:40.554265 master-0 kubenswrapper[28504]: I0318 13:28:40.554220 28504 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:40.554372 master-0 kubenswrapper[28504]: I0318 13:28:40.554267 28504 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:40.554372 master-0 kubenswrapper[28504]: I0318 13:28:40.554284 28504 
reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:40.554372 master-0 kubenswrapper[28504]: I0318 13:28:40.554297 28504 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:40.554372 master-0 kubenswrapper[28504]: I0318 13:28:40.554308 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgnpw\" (UniqueName: \"kubernetes.io/projected/fe6cd387-db28-4db0-b933-ba58fcaf8f24-kube-api-access-hgnpw\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:40.554372 master-0 kubenswrapper[28504]: I0318 13:28:40.554317 28504 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6cd387-db28-4db0-b933-ba58fcaf8f24-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:40.554372 master-0 kubenswrapper[28504]: I0318 13:28:40.554327 28504 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe6cd387-db28-4db0-b933-ba58fcaf8f24-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:28:41.693036 master-0 kubenswrapper[28504]: E0318 13:28:41.574959 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:28:41.693036 master-0 kubenswrapper[28504]: E0318 13:28:41.575082 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:29:45.575049073 +0000 UTC m=+363.069854848 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found Mar 18 13:28:41.693036 master-0 kubenswrapper[28504]: I0318 13:28:41.594611 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"8a6d091684445ebf61706d71c8ad54e4585a378de5782b68b2b11955cf11e07e"} Mar 18 13:28:41.693036 master-0 kubenswrapper[28504]: I0318 13:28:41.594677 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"33e3b15f0aa36391703b3dedb79d13adc0bad895cf8b2639756a8d4e6ba2eebe"} Mar 18 13:28:42.607503 master-0 kubenswrapper[28504]: I0318 13:28:42.607373 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"b44d7d84685c9b8c0c8ef7217050f9c290901924206a9f4d56db4b48619de79b"} Mar 18 13:28:42.607503 master-0 kubenswrapper[28504]: I0318 13:28:42.607432 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"c8c252e5b7a8b9d89e5c9ebdeafde184aaf72049a0d4947651970d87fe7d502b"} Mar 18 13:28:42.607503 master-0 kubenswrapper[28504]: I0318 13:28:42.607447 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2b69e842-e81a-46b7-b61f-5e2dca016a8d","Type":"ContainerStarted","Data":"f883819edd781a68adebf0d80e2ab59567c967102e3792abc19ef4e4f390f75b"} Mar 18 13:28:42.642516 master-0 kubenswrapper[28504]: I0318 
13:28:42.642424 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.009726836 podStartE2EDuration="10.64240402s" podCreationTimestamp="2026-03-18 13:28:32 +0000 UTC" firstStartedPulling="2026-03-18 13:28:33.503238039 +0000 UTC m=+290.998043814" lastFinishedPulling="2026-03-18 13:28:40.135915222 +0000 UTC m=+297.630720998" observedRunningTime="2026-03-18 13:28:42.63609739 +0000 UTC m=+300.130903165" watchObservedRunningTime="2026-03-18 13:28:42.64240402 +0000 UTC m=+300.137209795" Mar 18 13:28:42.716095 master-0 kubenswrapper[28504]: I0318 13:28:42.716042 28504 kubelet.go:1505] "Image garbage collection succeeded" Mar 18 13:28:42.967631 master-0 kubenswrapper[28504]: I0318 13:28:42.967556 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:28:45.668875 master-0 kubenswrapper[28504]: I0318 13:28:45.668810 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:45.672750 master-0 kubenswrapper[28504]: I0318 13:28:45.672703 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/0bcf9360-48a8-492e-93c3-ef39ecdaec04-telemeter-client-tls\") pod \"telemeter-client-5f5f6c46c8-55vzk\" (UID: \"0bcf9360-48a8-492e-93c3-ef39ecdaec04\") " pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:45.944958 master-0 kubenswrapper[28504]: I0318 13:28:45.944877 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" Mar 18 13:28:46.834330 master-0 kubenswrapper[28504]: I0318 13:28:46.834263 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk"] Mar 18 13:28:47.655126 master-0 kubenswrapper[28504]: I0318 13:28:47.655080 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" event={"ID":"0bcf9360-48a8-492e-93c3-ef39ecdaec04","Type":"ContainerStarted","Data":"2107cf42491d33b6f75bb5108526134fdfcdf031c29f88daea50d686fd256706"} Mar 18 13:28:50.683793 master-0 kubenswrapper[28504]: I0318 13:28:50.683743 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/0.log" Mar 18 13:28:50.684384 master-0 kubenswrapper[28504]: I0318 13:28:50.683807 28504 generic.go:334] "Generic (PLEG): container finished" podID="0bcf9360-48a8-492e-93c3-ef39ecdaec04" containerID="502eafcadb96188d7a46b6983cf7ecfca5cb67177be6e9f1912700299b4b5471" exitCode=1 Mar 18 13:28:50.684384 master-0 kubenswrapper[28504]: I0318 13:28:50.683842 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" event={"ID":"0bcf9360-48a8-492e-93c3-ef39ecdaec04","Type":"ContainerStarted","Data":"28f88c513401e9282e8cbc57f4d1342586c2fb799d15f196b2056f54fead5871"} Mar 18 13:28:50.684384 master-0 kubenswrapper[28504]: I0318 13:28:50.683869 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" event={"ID":"0bcf9360-48a8-492e-93c3-ef39ecdaec04","Type":"ContainerStarted","Data":"68ff129d02f59ccc3c29ebf21cf9bbccc873b5951c93bf289d542b246e7972f7"} Mar 18 13:28:50.684384 master-0 kubenswrapper[28504]: I0318 13:28:50.683878 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" event={"ID":"0bcf9360-48a8-492e-93c3-ef39ecdaec04","Type":"ContainerDied","Data":"502eafcadb96188d7a46b6983cf7ecfca5cb67177be6e9f1912700299b4b5471"} Mar 18 13:28:50.684577 master-0 kubenswrapper[28504]: I0318 13:28:50.684503 28504 scope.go:117] "RemoveContainer" containerID="502eafcadb96188d7a46b6983cf7ecfca5cb67177be6e9f1912700299b4b5471" Mar 18 13:28:51.011885 master-0 kubenswrapper[28504]: I0318 13:28:51.011799 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 13:28:51.012136 master-0 kubenswrapper[28504]: E0318 13:28:51.012108 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" Mar 18 13:28:51.012136 master-0 kubenswrapper[28504]: I0318 13:28:51.012123 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" Mar 18 13:28:51.012330 master-0 kubenswrapper[28504]: I0318 13:28:51.012299 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" containerName="console" Mar 18 13:28:51.014435 master-0 kubenswrapper[28504]: I0318 13:28:51.014381 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.017985 master-0 kubenswrapper[28504]: I0318 13:28:51.017947 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 13:28:51.018686 master-0 kubenswrapper[28504]: I0318 13:28:51.018666 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 13:28:51.018803 master-0 kubenswrapper[28504]: I0318 13:28:51.018784 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 13:28:51.018954 master-0 kubenswrapper[28504]: I0318 13:28:51.018887 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 13:28:51.019122 master-0 kubenswrapper[28504]: I0318 13:28:51.018985 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 13:28:51.019648 master-0 kubenswrapper[28504]: I0318 13:28:51.019623 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 13:28:51.019733 master-0 kubenswrapper[28504]: I0318 13:28:51.019629 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 13:28:51.026285 master-0 kubenswrapper[28504]: I0318 13:28:51.026209 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 13:28:51.043229 master-0 kubenswrapper[28504]: I0318 13:28:51.043103 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 13:28:51.115957 master-0 kubenswrapper[28504]: I0318 13:28:51.115825 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/dad1d337-f09f-4479-831b-d1e02f38148f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.115957 master-0 kubenswrapper[28504]: I0318 13:28:51.115897 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dad1d337-f09f-4479-831b-d1e02f38148f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.115957 master-0 kubenswrapper[28504]: I0318 13:28:51.115928 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116404 master-0 kubenswrapper[28504]: I0318 13:28:51.115989 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dad1d337-f09f-4479-831b-d1e02f38148f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116404 master-0 kubenswrapper[28504]: I0318 13:28:51.116172 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-web-config\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116404 master-0 kubenswrapper[28504]: I0318 13:28:51.116223 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad1d337-f09f-4479-831b-d1e02f38148f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116404 master-0 kubenswrapper[28504]: I0318 13:28:51.116284 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116404 master-0 kubenswrapper[28504]: I0318 13:28:51.116333 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116404 master-0 kubenswrapper[28504]: I0318 13:28:51.116405 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116579 master-0 kubenswrapper[28504]: I0318 13:28:51.116423 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dad1d337-f09f-4479-831b-d1e02f38148f-config-out\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116579 master-0 kubenswrapper[28504]: I0318 13:28:51.116441 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drst9\" (UniqueName: \"kubernetes.io/projected/dad1d337-f09f-4479-831b-d1e02f38148f-kube-api-access-drst9\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.116579 master-0 kubenswrapper[28504]: I0318 13:28:51.116459 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-config-volume\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218323 master-0 kubenswrapper[28504]: I0318 13:28:51.218265 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-config-volume\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218353 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/dad1d337-f09f-4479-831b-d1e02f38148f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 
13:28:51.218374 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dad1d337-f09f-4479-831b-d1e02f38148f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218408 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218429 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dad1d337-f09f-4479-831b-d1e02f38148f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218463 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad1d337-f09f-4479-831b-d1e02f38148f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218479 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-web-config\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 
13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218543 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218576 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.218625 master-0 kubenswrapper[28504]: I0318 13:28:51.218616 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.219114 master-0 kubenswrapper[28504]: I0318 13:28:51.219092 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dad1d337-f09f-4479-831b-d1e02f38148f-config-out\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.219165 master-0 kubenswrapper[28504]: I0318 13:28:51.219120 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drst9\" (UniqueName: \"kubernetes.io/projected/dad1d337-f09f-4479-831b-d1e02f38148f-kube-api-access-drst9\") pod 
\"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.221078 master-0 kubenswrapper[28504]: I0318 13:28:51.220641 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/dad1d337-f09f-4479-831b-d1e02f38148f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.221568 master-0 kubenswrapper[28504]: I0318 13:28:51.221503 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dad1d337-f09f-4479-831b-d1e02f38148f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.222163 master-0 kubenswrapper[28504]: I0318 13:28:51.222132 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad1d337-f09f-4479-831b-d1e02f38148f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.226486 master-0 kubenswrapper[28504]: I0318 13:28:51.223584 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dad1d337-f09f-4479-831b-d1e02f38148f-config-out\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.226486 master-0 kubenswrapper[28504]: I0318 13:28:51.224381 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-config-volume\") pod 
\"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.226486 master-0 kubenswrapper[28504]: I0318 13:28:51.224497 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.226486 master-0 kubenswrapper[28504]: I0318 13:28:51.225596 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.226486 master-0 kubenswrapper[28504]: I0318 13:28:51.226437 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dad1d337-f09f-4479-831b-d1e02f38148f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.227310 master-0 kubenswrapper[28504]: I0318 13:28:51.227119 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.227310 master-0 kubenswrapper[28504]: I0318 13:28:51.227261 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-web-config\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.231089 master-0 kubenswrapper[28504]: I0318 13:28:51.231029 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/dad1d337-f09f-4479-831b-d1e02f38148f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.241044 master-0 kubenswrapper[28504]: I0318 13:28:51.238965 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drst9\" (UniqueName: \"kubernetes.io/projected/dad1d337-f09f-4479-831b-d1e02f38148f-kube-api-access-drst9\") pod \"alertmanager-main-0\" (UID: \"dad1d337-f09f-4479-831b-d1e02f38148f\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.331049 master-0 kubenswrapper[28504]: I0318 13:28:51.330876 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:28:51.698329 master-0 kubenswrapper[28504]: I0318 13:28:51.698277 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/1.log" Mar 18 13:28:51.701088 master-0 kubenswrapper[28504]: I0318 13:28:51.701028 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/0.log" Mar 18 13:28:51.701413 master-0 kubenswrapper[28504]: I0318 13:28:51.701381 28504 generic.go:334] "Generic (PLEG): container finished" podID="0bcf9360-48a8-492e-93c3-ef39ecdaec04" containerID="74a8f9dba69d0bf81c4676c452aae93f3e8fe050359d2af712dcff51052b9b5e" exitCode=1 Mar 18 13:28:51.701543 master-0 kubenswrapper[28504]: I0318 13:28:51.701491 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" event={"ID":"0bcf9360-48a8-492e-93c3-ef39ecdaec04","Type":"ContainerDied","Data":"74a8f9dba69d0bf81c4676c452aae93f3e8fe050359d2af712dcff51052b9b5e"} Mar 18 13:28:51.701609 master-0 kubenswrapper[28504]: I0318 13:28:51.701565 28504 scope.go:117] "RemoveContainer" containerID="502eafcadb96188d7a46b6983cf7ecfca5cb67177be6e9f1912700299b4b5471" Mar 18 13:28:51.703428 master-0 kubenswrapper[28504]: I0318 13:28:51.702050 28504 scope.go:117] "RemoveContainer" containerID="74a8f9dba69d0bf81c4676c452aae93f3e8fe050359d2af712dcff51052b9b5e" Mar 18 13:28:51.703428 master-0 kubenswrapper[28504]: E0318 13:28:51.702331 28504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"telemeter-client\" with CrashLoopBackOff: \"back-off 10s restarting failed container=telemeter-client pod=telemeter-client-5f5f6c46c8-55vzk_openshift-monitoring(0bcf9360-48a8-492e-93c3-ef39ecdaec04)\"" 
pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" podUID="0bcf9360-48a8-492e-93c3-ef39ecdaec04" Mar 18 13:28:51.779690 master-0 kubenswrapper[28504]: W0318 13:28:51.779636 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddad1d337_f09f_4479_831b_d1e02f38148f.slice/crio-a3d3c075479cb936c2b26238d91237a965c21242dec715062679f5c3e814df50 WatchSource:0}: Error finding container a3d3c075479cb936c2b26238d91237a965c21242dec715062679f5c3e814df50: Status 404 returned error can't find the container with id a3d3c075479cb936c2b26238d91237a965c21242dec715062679f5c3e814df50 Mar 18 13:28:51.784445 master-0 kubenswrapper[28504]: I0318 13:28:51.784384 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 13:28:52.726643 master-0 kubenswrapper[28504]: I0318 13:28:52.726540 28504 generic.go:334] "Generic (PLEG): container finished" podID="dad1d337-f09f-4479-831b-d1e02f38148f" containerID="f35f3f2f64fceda98cea173a989cf2cbf7282644fd0835910a6479493a95435a" exitCode=0 Mar 18 13:28:52.726643 master-0 kubenswrapper[28504]: I0318 13:28:52.726642 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerDied","Data":"f35f3f2f64fceda98cea173a989cf2cbf7282644fd0835910a6479493a95435a"} Mar 18 13:28:52.727455 master-0 kubenswrapper[28504]: I0318 13:28:52.726671 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"a3d3c075479cb936c2b26238d91237a965c21242dec715062679f5c3e814df50"} Mar 18 13:28:52.732072 master-0 kubenswrapper[28504]: I0318 13:28:52.732027 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/1.log" Mar 18 13:28:52.733738 master-0 kubenswrapper[28504]: I0318 13:28:52.733709 28504 scope.go:117] "RemoveContainer" containerID="74a8f9dba69d0bf81c4676c452aae93f3e8fe050359d2af712dcff51052b9b5e" Mar 18 13:28:52.733966 master-0 kubenswrapper[28504]: E0318 13:28:52.733927 28504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"telemeter-client\" with CrashLoopBackOff: \"back-off 10s restarting failed container=telemeter-client pod=telemeter-client-5f5f6c46c8-55vzk_openshift-monitoring(0bcf9360-48a8-492e-93c3-ef39ecdaec04)\"" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" podUID="0bcf9360-48a8-492e-93c3-ef39ecdaec04" Mar 18 13:28:55.765454 master-0 kubenswrapper[28504]: I0318 13:28:55.765392 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"2d8b4c9142c49dae3818b930f488b588de9c3cb900aed66a774507db15243b50"} Mar 18 13:28:55.765454 master-0 kubenswrapper[28504]: I0318 13:28:55.765450 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"b916920bc4a31ba29db6631d2e2b2cf18b9d47abaabbd914f8ad79661cc0d36c"} Mar 18 13:28:55.765454 master-0 kubenswrapper[28504]: I0318 13:28:55.765461 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"c0850bc9fb4f609957b54fba8b8621ea65e3ff85e1a639933a5257c930f43a9e"} Mar 18 13:28:55.765454 master-0 kubenswrapper[28504]: I0318 13:28:55.765472 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"45c8e8f221fc72676feaed1508cfabb8f67759de8c69049621b70a948a0e81b0"} Mar 18 13:28:55.766236 master-0 kubenswrapper[28504]: I0318 13:28:55.765482 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"b0646c86805dc9448972d28a7aa625637dd4287edc7dd7ab33ba90aa2cdc5712"} Mar 18 13:28:55.766236 master-0 kubenswrapper[28504]: I0318 13:28:55.765493 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dad1d337-f09f-4479-831b-d1e02f38148f","Type":"ContainerStarted","Data":"d22e5902dddd3b77ee81b990e34109e8fc66581d03cf337130b2d8d6e8097dfa"} Mar 18 13:28:55.816566 master-0 kubenswrapper[28504]: I0318 13:28:55.816468 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.885394804 podStartE2EDuration="5.816442967s" podCreationTimestamp="2026-03-18 13:28:50 +0000 UTC" firstStartedPulling="2026-03-18 13:28:52.728036775 +0000 UTC m=+310.222842550" lastFinishedPulling="2026-03-18 13:28:54.659084938 +0000 UTC m=+312.153890713" observedRunningTime="2026-03-18 13:28:55.810037864 +0000 UTC m=+313.304843659" watchObservedRunningTime="2026-03-18 13:28:55.816442967 +0000 UTC m=+313.311248742" Mar 18 13:29:06.749694 master-0 kubenswrapper[28504]: I0318 13:29:06.749639 28504 scope.go:117] "RemoveContainer" containerID="74a8f9dba69d0bf81c4676c452aae93f3e8fe050359d2af712dcff51052b9b5e" Mar 18 13:29:07.007401 master-0 kubenswrapper[28504]: I0318 13:29:07.007280 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/1.log" Mar 18 13:29:07.008032 master-0 kubenswrapper[28504]: I0318 13:29:07.007978 28504 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" event={"ID":"0bcf9360-48a8-492e-93c3-ef39ecdaec04","Type":"ContainerStarted","Data":"e3c040e345c99023b37c6eae3e534d3c099ea79d227095c6df0393db47ef3b3a"} Mar 18 13:29:07.049582 master-0 kubenswrapper[28504]: I0318 13:29:07.049501 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-5f5f6c46c8-55vzk" podStartSLOduration=50.871847371 podStartE2EDuration="54.049478415s" podCreationTimestamp="2026-03-18 13:28:13 +0000 UTC" firstStartedPulling="2026-03-18 13:28:46.839388019 +0000 UTC m=+304.334193794" lastFinishedPulling="2026-03-18 13:28:50.017019063 +0000 UTC m=+307.511824838" observedRunningTime="2026-03-18 13:29:07.039070507 +0000 UTC m=+324.533876302" watchObservedRunningTime="2026-03-18 13:29:07.049478415 +0000 UTC m=+324.544284200" Mar 18 13:29:07.711618 master-0 kubenswrapper[28504]: I0318 13:29:07.711565 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57ccc4885-h97bt"] Mar 18 13:29:07.712601 master-0 kubenswrapper[28504]: I0318 13:29:07.712573 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.729118 master-0 kubenswrapper[28504]: I0318 13:29:07.729060 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57ccc4885-h97bt"] Mar 18 13:29:07.909157 master-0 kubenswrapper[28504]: I0318 13:29:07.909089 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-trusted-ca-bundle\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.909608 master-0 kubenswrapper[28504]: I0318 13:29:07.909182 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-serving-cert\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.909608 master-0 kubenswrapper[28504]: I0318 13:29:07.909309 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-oauth-config\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.909608 master-0 kubenswrapper[28504]: I0318 13:29:07.909419 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-oauth-serving-cert\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.909608 master-0 
kubenswrapper[28504]: I0318 13:29:07.909477 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-service-ca\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.909608 master-0 kubenswrapper[28504]: I0318 13:29:07.909594 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpzt\" (UniqueName: \"kubernetes.io/projected/83a66ab9-3aee-4035-92ba-2be81be6c4fd-kube-api-access-xfpzt\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:07.909845 master-0 kubenswrapper[28504]: I0318 13:29:07.909634 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-config\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.011564 master-0 kubenswrapper[28504]: I0318 13:29:08.011423 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-trusted-ca-bundle\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.011779 master-0 kubenswrapper[28504]: I0318 13:29:08.011687 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-serving-cert\") pod \"console-57ccc4885-h97bt\" (UID: 
\"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.011779 master-0 kubenswrapper[28504]: I0318 13:29:08.011726 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-oauth-config\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.012596 master-0 kubenswrapper[28504]: I0318 13:29:08.012558 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-trusted-ca-bundle\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.012840 master-0 kubenswrapper[28504]: I0318 13:29:08.012809 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-oauth-serving-cert\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.012912 master-0 kubenswrapper[28504]: I0318 13:29:08.012872 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-oauth-serving-cert\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.012983 master-0 kubenswrapper[28504]: I0318 13:29:08.012917 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-service-ca\") pod 
\"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.013590 master-0 kubenswrapper[28504]: I0318 13:29:08.013566 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-service-ca\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.013667 master-0 kubenswrapper[28504]: I0318 13:29:08.013657 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpzt\" (UniqueName: \"kubernetes.io/projected/83a66ab9-3aee-4035-92ba-2be81be6c4fd-kube-api-access-xfpzt\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.013703 master-0 kubenswrapper[28504]: I0318 13:29:08.013685 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-config\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.016812 master-0 kubenswrapper[28504]: I0318 13:29:08.014536 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-config\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.016812 master-0 kubenswrapper[28504]: I0318 13:29:08.015550 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-serving-cert\") pod 
\"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.024021 master-0 kubenswrapper[28504]: I0318 13:29:08.017874 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-oauth-config\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.035245 master-0 kubenswrapper[28504]: I0318 13:29:08.035197 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpzt\" (UniqueName: \"kubernetes.io/projected/83a66ab9-3aee-4035-92ba-2be81be6c4fd-kube-api-access-xfpzt\") pod \"console-57ccc4885-h97bt\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:08.330689 master-0 kubenswrapper[28504]: I0318 13:29:08.330508 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:09.177491 master-0 kubenswrapper[28504]: I0318 13:29:09.177416 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57ccc4885-h97bt"] Mar 18 13:29:09.179904 master-0 kubenswrapper[28504]: W0318 13:29:09.179871 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a66ab9_3aee_4035_92ba_2be81be6c4fd.slice/crio-0bb1204ea76896fab40affb21f68cdc0af2d710eb112bdad25ca8642c9bfa363 WatchSource:0}: Error finding container 0bb1204ea76896fab40affb21f68cdc0af2d710eb112bdad25ca8642c9bfa363: Status 404 returned error can't find the container with id 0bb1204ea76896fab40affb21f68cdc0af2d710eb112bdad25ca8642c9bfa363 Mar 18 13:29:10.032006 master-0 kubenswrapper[28504]: I0318 13:29:10.031271 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ccc4885-h97bt" event={"ID":"83a66ab9-3aee-4035-92ba-2be81be6c4fd","Type":"ContainerStarted","Data":"fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2"} Mar 18 13:29:10.032006 master-0 kubenswrapper[28504]: I0318 13:29:10.031338 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ccc4885-h97bt" event={"ID":"83a66ab9-3aee-4035-92ba-2be81be6c4fd","Type":"ContainerStarted","Data":"0bb1204ea76896fab40affb21f68cdc0af2d710eb112bdad25ca8642c9bfa363"} Mar 18 13:29:10.144069 master-0 kubenswrapper[28504]: I0318 13:29:10.143969 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57ccc4885-h97bt" podStartSLOduration=3.143930931 podStartE2EDuration="3.143930931s" podCreationTimestamp="2026-03-18 13:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:29:10.140418271 +0000 UTC m=+327.635224046" 
watchObservedRunningTime="2026-03-18 13:29:10.143930931 +0000 UTC m=+327.638736726" Mar 18 13:29:11.763067 master-0 kubenswrapper[28504]: I0318 13:29:11.762904 28504 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podfe6cd387-db28-4db0-b933-ba58fcaf8f24"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podfe6cd387-db28-4db0-b933-ba58fcaf8f24] : Timed out while waiting for systemd to remove kubepods-burstable-podfe6cd387_db28_4db0_b933_ba58fcaf8f24.slice" Mar 18 13:29:11.763067 master-0 kubenswrapper[28504]: E0318 13:29:11.763007 28504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podfe6cd387-db28-4db0-b933-ba58fcaf8f24] : unable to destroy cgroup paths for cgroup [kubepods burstable podfe6cd387-db28-4db0-b933-ba58fcaf8f24] : Timed out while waiting for systemd to remove kubepods-burstable-podfe6cd387_db28_4db0_b933_ba58fcaf8f24.slice" pod="openshift-console/console-86cfd4f585-tfs7z" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" Mar 18 13:29:12.048828 master-0 kubenswrapper[28504]: I0318 13:29:12.048689 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86cfd4f585-tfs7z" Mar 18 13:29:12.133343 master-0 kubenswrapper[28504]: I0318 13:29:12.130363 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86cfd4f585-tfs7z"] Mar 18 13:29:12.139955 master-0 kubenswrapper[28504]: I0318 13:29:12.139877 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-86cfd4f585-tfs7z"] Mar 18 13:29:12.757626 master-0 kubenswrapper[28504]: I0318 13:29:12.757559 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6cd387-db28-4db0-b933-ba58fcaf8f24" path="/var/lib/kubelet/pods/fe6cd387-db28-4db0-b933-ba58fcaf8f24/volumes" Mar 18 13:29:17.034180 master-0 kubenswrapper[28504]: I0318 13:29:17.034097 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 13:29:17.035628 master-0 kubenswrapper[28504]: I0318 13:29:17.035590 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.038822 master-0 kubenswrapper[28504]: I0318 13:29:17.038759 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-62gbt" Mar 18 13:29:17.039116 master-0 kubenswrapper[28504]: I0318 13:29:17.038829 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:29:17.049271 master-0 kubenswrapper[28504]: I0318 13:29:17.049234 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 13:29:17.109356 master-0 kubenswrapper[28504]: I0318 13:29:17.109249 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b80036-6868-4e0b-9f3a-84c2817e566d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.109584 master-0 kubenswrapper[28504]: I0318 13:29:17.109413 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-var-lock\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.109641 master-0 kubenswrapper[28504]: I0318 13:29:17.109595 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.211496 master-0 
kubenswrapper[28504]: I0318 13:29:17.211416 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-var-lock\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.211761 master-0 kubenswrapper[28504]: I0318 13:29:17.211514 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.211761 master-0 kubenswrapper[28504]: I0318 13:29:17.211539 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-var-lock\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.211761 master-0 kubenswrapper[28504]: I0318 13:29:17.211568 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b80036-6868-4e0b-9f3a-84c2817e566d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.211761 master-0 kubenswrapper[28504]: I0318 13:29:17.211581 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.230059 
master-0 kubenswrapper[28504]: I0318 13:29:17.229069 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b80036-6868-4e0b-9f3a-84c2817e566d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.361545 master-0 kubenswrapper[28504]: I0318 13:29:17.360898 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:17.816720 master-0 kubenswrapper[28504]: I0318 13:29:17.816679 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 13:29:17.821118 master-0 kubenswrapper[28504]: W0318 13:29:17.820695 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod34b80036_6868_4e0b_9f3a_84c2817e566d.slice/crio-c949b8ebf17ee3c4076ea934b539b7ddcbcf66da6b03a07f1d11c334faf155be WatchSource:0}: Error finding container c949b8ebf17ee3c4076ea934b539b7ddcbcf66da6b03a07f1d11c334faf155be: Status 404 returned error can't find the container with id c949b8ebf17ee3c4076ea934b539b7ddcbcf66da6b03a07f1d11c334faf155be Mar 18 13:29:18.094059 master-0 kubenswrapper[28504]: I0318 13:29:18.093913 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"34b80036-6868-4e0b-9f3a-84c2817e566d","Type":"ContainerStarted","Data":"c949b8ebf17ee3c4076ea934b539b7ddcbcf66da6b03a07f1d11c334faf155be"} Mar 18 13:29:18.331226 master-0 kubenswrapper[28504]: I0318 13:29:18.331118 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:18.331804 master-0 kubenswrapper[28504]: I0318 13:29:18.331283 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:18.338163 master-0 kubenswrapper[28504]: I0318 13:29:18.337559 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:19.109802 master-0 kubenswrapper[28504]: I0318 13:29:19.109720 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"34b80036-6868-4e0b-9f3a-84c2817e566d","Type":"ContainerStarted","Data":"1ac2bf0a18485c2d8def66bc41227b8995e207e008874bc2ef9e4f8c95264e9d"} Mar 18 13:29:19.117024 master-0 kubenswrapper[28504]: I0318 13:29:19.116967 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:29:19.135023 master-0 kubenswrapper[28504]: I0318 13:29:19.134846 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.134820725 podStartE2EDuration="2.134820725s" podCreationTimestamp="2026-03-18 13:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:29:19.133352563 +0000 UTC m=+336.628158348" watchObservedRunningTime="2026-03-18 13:29:19.134820725 +0000 UTC m=+336.629626510" Mar 18 13:29:19.209872 master-0 kubenswrapper[28504]: I0318 13:29:19.202594 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7486c568bf-jngmz"] Mar 18 13:29:32.968165 master-0 kubenswrapper[28504]: I0318 13:29:32.967642 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:29:33.006136 master-0 kubenswrapper[28504]: I0318 13:29:33.006072 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:29:33.252453 master-0 
kubenswrapper[28504]: I0318 13:29:33.252326 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:29:44.243804 master-0 kubenswrapper[28504]: I0318 13:29:44.243647 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7486c568bf-jngmz" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console" containerID="cri-o://74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0" gracePeriod=15
Mar 18 13:29:44.665273 master-0 kubenswrapper[28504]: I0318 13:29:44.665226 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7486c568bf-jngmz_e4022ee9-babb-4dc3-a486-ddbab9fa8c16/console/0.log"
Mar 18 13:29:44.665537 master-0 kubenswrapper[28504]: I0318 13:29:44.665324 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:29:44.835260 master-0 kubenswrapper[28504]: I0318 13:29:44.835057 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-oauth-serving-cert\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.835632 master-0 kubenswrapper[28504]: I0318 13:29:44.835314 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjw6d\" (UniqueName: \"kubernetes.io/projected/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-kube-api-access-cjw6d\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.835632 master-0 kubenswrapper[28504]: I0318 13:29:44.835424 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-serving-cert\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.835632 master-0 kubenswrapper[28504]: I0318 13:29:44.835475 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-config\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.835632 master-0 kubenswrapper[28504]: I0318 13:29:44.835531 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-trusted-ca-bundle\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.835632 master-0 kubenswrapper[28504]: I0318 13:29:44.835567 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-oauth-config\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.835632 master-0 kubenswrapper[28504]: I0318 13:29:44.835605 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-service-ca\") pod \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\" (UID: \"e4022ee9-babb-4dc3-a486-ddbab9fa8c16\") "
Mar 18 13:29:44.836098 master-0 kubenswrapper[28504]: I0318 13:29:44.835922 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:29:44.836388 master-0 kubenswrapper[28504]: I0318 13:29:44.836283 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-config" (OuterVolumeSpecName: "console-config") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:29:44.836388 master-0 kubenswrapper[28504]: I0318 13:29:44.836350 28504 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:44.837549 master-0 kubenswrapper[28504]: I0318 13:29:44.837282 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:29:44.837549 master-0 kubenswrapper[28504]: I0318 13:29:44.837397 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-service-ca" (OuterVolumeSpecName: "service-ca") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:29:44.838514 master-0 kubenswrapper[28504]: I0318 13:29:44.838467 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-kube-api-access-cjw6d" (OuterVolumeSpecName: "kube-api-access-cjw6d") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "kube-api-access-cjw6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:29:44.839173 master-0 kubenswrapper[28504]: I0318 13:29:44.839147 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:29:44.839712 master-0 kubenswrapper[28504]: I0318 13:29:44.839631 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e4022ee9-babb-4dc3-a486-ddbab9fa8c16" (UID: "e4022ee9-babb-4dc3-a486-ddbab9fa8c16"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:29:44.938440 master-0 kubenswrapper[28504]: I0318 13:29:44.938002 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjw6d\" (UniqueName: \"kubernetes.io/projected/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-kube-api-access-cjw6d\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:44.938440 master-0 kubenswrapper[28504]: I0318 13:29:44.938054 28504 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:44.938440 master-0 kubenswrapper[28504]: I0318 13:29:44.938065 28504 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:44.938440 master-0 kubenswrapper[28504]: I0318 13:29:44.938106 28504 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:44.938440 master-0 kubenswrapper[28504]: I0318 13:29:44.938327 28504 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:44.938440 master-0 kubenswrapper[28504]: I0318 13:29:44.938345 28504 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4022ee9-babb-4dc3-a486-ddbab9fa8c16-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:45.324098 master-0 kubenswrapper[28504]: I0318 13:29:45.323888 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7486c568bf-jngmz_e4022ee9-babb-4dc3-a486-ddbab9fa8c16/console/0.log"
Mar 18 13:29:45.324098 master-0 kubenswrapper[28504]: I0318 13:29:45.323981 28504 generic.go:334] "Generic (PLEG): container finished" podID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerID="74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0" exitCode=2
Mar 18 13:29:45.324098 master-0 kubenswrapper[28504]: I0318 13:29:45.324044 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7486c568bf-jngmz" event={"ID":"e4022ee9-babb-4dc3-a486-ddbab9fa8c16","Type":"ContainerDied","Data":"74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0"}
Mar 18 13:29:45.324800 master-0 kubenswrapper[28504]: I0318 13:29:45.324118 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7486c568bf-jngmz" event={"ID":"e4022ee9-babb-4dc3-a486-ddbab9fa8c16","Type":"ContainerDied","Data":"3fabef9d0629883821c407cd40b7b792db02f7a31181978179677a6ce6565f15"}
Mar 18 13:29:45.324800 master-0 kubenswrapper[28504]: I0318 13:29:45.324152 28504 scope.go:117] "RemoveContainer" containerID="74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0"
Mar 18 13:29:45.324800 master-0 kubenswrapper[28504]: I0318 13:29:45.324295 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7486c568bf-jngmz"
Mar 18 13:29:45.366427 master-0 kubenswrapper[28504]: I0318 13:29:45.366376 28504 scope.go:117] "RemoveContainer" containerID="74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0"
Mar 18 13:29:45.367806 master-0 kubenswrapper[28504]: E0318 13:29:45.367738 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0\": container with ID starting with 74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0 not found: ID does not exist" containerID="74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0"
Mar 18 13:29:45.367994 master-0 kubenswrapper[28504]: I0318 13:29:45.367800 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0"} err="failed to get container status \"74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0\": rpc error: code = NotFound desc = could not find container \"74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0\": container with ID starting with 74af20829ff8291f0caf615baa5467cf4fd08fe1cab2d2e9f4abb6418bbb6be0 not found: ID does not exist"
Mar 18 13:29:45.467962 master-0 kubenswrapper[28504]: I0318 13:29:45.467868 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7486c568bf-jngmz"]
Mar 18 13:29:45.530709 master-0 kubenswrapper[28504]: I0318 13:29:45.530604 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7486c568bf-jngmz"]
Mar 18 13:29:45.654541 master-0 kubenswrapper[28504]: E0318 13:29:45.654402 28504 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2dpn1smcfbjnb: secret "metrics-server-2dpn1smcfbjnb" not found
Mar 18 13:29:45.654541 master-0 kubenswrapper[28504]: E0318 13:29:45.654494 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle podName:b79758b7-9129-496c-abec-80d455648454 nodeName:}" failed. No retries permitted until 2026-03-18 13:31:47.654472627 +0000 UTC m=+485.149278402 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle") pod "metrics-server-648866dd9c-ztkrd" (UID: "b79758b7-9129-496c-abec-80d455648454") : secret "metrics-server-2dpn1smcfbjnb" not found
Mar 18 13:29:46.760531 master-0 kubenswrapper[28504]: I0318 13:29:46.760460 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" path="/var/lib/kubelet/pods/e4022ee9-babb-4dc3-a486-ddbab9fa8c16/volumes"
Mar 18 13:29:48.640966 master-0 kubenswrapper[28504]: I0318 13:29:48.636603 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86544c5fdf-7nkss"]
Mar 18 13:29:48.642306 master-0 kubenswrapper[28504]: E0318 13:29:48.642268 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console"
Mar 18 13:29:48.642419 master-0 kubenswrapper[28504]: I0318 13:29:48.642404 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console"
Mar 18 13:29:48.642885 master-0 kubenswrapper[28504]: I0318 13:29:48.642860 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4022ee9-babb-4dc3-a486-ddbab9fa8c16" containerName="console"
Mar 18 13:29:48.644017 master-0 kubenswrapper[28504]: I0318 13:29:48.643993 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.678313 master-0 kubenswrapper[28504]: I0318 13:29:48.678226 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86544c5fdf-7nkss"]
Mar 18 13:29:48.814217 master-0 kubenswrapper[28504]: I0318 13:29:48.814112 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-trusted-ca-bundle\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.814217 master-0 kubenswrapper[28504]: I0318 13:29:48.814210 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-oauth-serving-cert\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.814489 master-0 kubenswrapper[28504]: I0318 13:29:48.814239 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-config\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.814489 master-0 kubenswrapper[28504]: I0318 13:29:48.814284 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-oauth-config\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.814489 master-0 kubenswrapper[28504]: I0318 13:29:48.814313 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbsnz\" (UniqueName: \"kubernetes.io/projected/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-kube-api-access-lbsnz\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.814489 master-0 kubenswrapper[28504]: I0318 13:29:48.814346 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-serving-cert\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.814489 master-0 kubenswrapper[28504]: I0318 13:29:48.814369 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-service-ca\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.915950 master-0 kubenswrapper[28504]: I0318 13:29:48.915795 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-oauth-config\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.915950 master-0 kubenswrapper[28504]: I0318 13:29:48.915867 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbsnz\" (UniqueName: \"kubernetes.io/projected/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-kube-api-access-lbsnz\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.916309 master-0 kubenswrapper[28504]: I0318 13:29:48.916220 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-serving-cert\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.916365 master-0 kubenswrapper[28504]: I0318 13:29:48.916340 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-service-ca\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.916715 master-0 kubenswrapper[28504]: I0318 13:29:48.916691 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-trusted-ca-bundle\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.916797 master-0 kubenswrapper[28504]: I0318 13:29:48.916775 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-oauth-serving-cert\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.917081 master-0 kubenswrapper[28504]: I0318 13:29:48.917048 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-config\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.917518 master-0 kubenswrapper[28504]: I0318 13:29:48.917490 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-service-ca\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.917859 master-0 kubenswrapper[28504]: I0318 13:29:48.917837 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-oauth-serving-cert\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.917980 master-0 kubenswrapper[28504]: I0318 13:29:48.917911 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-trusted-ca-bundle\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.918277 master-0 kubenswrapper[28504]: I0318 13:29:48.918242 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-config\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.919476 master-0 kubenswrapper[28504]: I0318 13:29:48.919457 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-serving-cert\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.919921 master-0 kubenswrapper[28504]: I0318 13:29:48.919904 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-oauth-config\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.942296 master-0 kubenswrapper[28504]: I0318 13:29:48.942177 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbsnz\" (UniqueName: \"kubernetes.io/projected/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-kube-api-access-lbsnz\") pod \"console-86544c5fdf-7nkss\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:48.970261 master-0 kubenswrapper[28504]: I0318 13:29:48.970177 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86544c5fdf-7nkss"
Mar 18 13:29:49.382276 master-0 kubenswrapper[28504]: W0318 13:29:49.382203 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb94880f8_cc2a_4724_adaa_d729d2ef9b1d.slice/crio-12697d38e1cd58a09343fac41cff8c082876fcfe1396e1b2bc01c38ebbb9d65b WatchSource:0}: Error finding container 12697d38e1cd58a09343fac41cff8c082876fcfe1396e1b2bc01c38ebbb9d65b: Status 404 returned error can't find the container with id 12697d38e1cd58a09343fac41cff8c082876fcfe1396e1b2bc01c38ebbb9d65b
Mar 18 13:29:49.383204 master-0 kubenswrapper[28504]: I0318 13:29:49.383148 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86544c5fdf-7nkss"]
Mar 18 13:29:50.373147 master-0 kubenswrapper[28504]: I0318 13:29:50.373098 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86544c5fdf-7nkss" event={"ID":"b94880f8-cc2a-4724-adaa-d729d2ef9b1d","Type":"ContainerStarted","Data":"10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13"}
Mar 18 13:29:50.373687 master-0 kubenswrapper[28504]: I0318 13:29:50.373159 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86544c5fdf-7nkss" event={"ID":"b94880f8-cc2a-4724-adaa-d729d2ef9b1d","Type":"ContainerStarted","Data":"12697d38e1cd58a09343fac41cff8c082876fcfe1396e1b2bc01c38ebbb9d65b"}
Mar 18 13:29:50.399304 master-0 kubenswrapper[28504]: I0318 13:29:50.399187 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86544c5fdf-7nkss" podStartSLOduration=2.399144742 podStartE2EDuration="2.399144742s" podCreationTimestamp="2026-03-18 13:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:29:50.394274144 +0000 UTC m=+367.889079929" watchObservedRunningTime="2026-03-18 13:29:50.399144742 +0000 UTC m=+367.893950537"
Mar 18 13:29:50.845076 master-0 kubenswrapper[28504]: I0318 13:29:50.845024 28504 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 13:29:50.845345 master-0 kubenswrapper[28504]: I0318 13:29:50.845297 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="cluster-policy-controller" containerID="cri-o://b9ab4da2bf00eddad01601b81bba9f16f6744134ee63b0910cd8e62f9b4a3e0d" gracePeriod=30
Mar 18 13:29:50.845462 master-0 kubenswrapper[28504]: I0318 13:29:50.845390 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager" containerID="cri-o://08b274aeaf9abbd5f8e5365d511a8523a672bf472c4f314741ea06a6ce223aa8" gracePeriod=30
Mar 18 13:29:50.845543 master-0 kubenswrapper[28504]: I0318 13:29:50.845422 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://87b86f2af8e501ae34658be585500655faa626562bf4927f068e08991f40d160" gracePeriod=30
Mar 18 13:29:50.845649 master-0 kubenswrapper[28504]: I0318 13:29:50.845420 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://e140efc28fb74fa94c1d843a6f6a44466dcb4914a6c8eada7179bb0663b14c56" gracePeriod=30
Mar 18 13:29:50.846267 master-0 kubenswrapper[28504]: I0318 13:29:50.846174 28504 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 13:29:50.847799 master-0 kubenswrapper[28504]: E0318 13:29:50.847762 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-cert-syncer"
Mar 18 13:29:50.847799 master-0 kubenswrapper[28504]: I0318 13:29:50.847791 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-cert-syncer"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: E0318 13:29:50.847840 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: I0318 13:29:50.847851 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: E0318 13:29:50.847863 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="cluster-policy-controller"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: I0318 13:29:50.847872 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="cluster-policy-controller"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: E0318 13:29:50.847890 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: I0318 13:29:50.847899 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: E0318 13:29:50.847950 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-recovery-controller"
Mar 18 13:29:50.847967 master-0 kubenswrapper[28504]: I0318 13:29:50.847962 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-recovery-controller"
Mar 18 13:29:50.848316 master-0 kubenswrapper[28504]: I0318 13:29:50.848154 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="cluster-policy-controller"
Mar 18 13:29:50.848316 master-0 kubenswrapper[28504]: I0318 13:29:50.848184 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-recovery-controller"
Mar 18 13:29:50.848316 master-0 kubenswrapper[28504]: I0318 13:29:50.848210 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager-cert-syncer"
Mar 18 13:29:50.848316 master-0 kubenswrapper[28504]: I0318 13:29:50.848235 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.848316 master-0 kubenswrapper[28504]: I0318 13:29:50.848253 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.848527 master-0 kubenswrapper[28504]: E0318 13:29:50.848475 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.848527 master-0 kubenswrapper[28504]: I0318 13:29:50.848490 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.848669 master-0 kubenswrapper[28504]: I0318 13:29:50.848650 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47f97eb0a0cc5aac7e96e57325228c9" containerName="kube-controller-manager"
Mar 18 13:29:50.956711 master-0 kubenswrapper[28504]: I0318 13:29:50.956634 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e88afeffbcb0115b3be33556cf14e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9a1e88afeffbcb0115b3be33556cf14e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:50.956711 master-0 kubenswrapper[28504]: I0318 13:29:50.956710 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e88afeffbcb0115b3be33556cf14e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9a1e88afeffbcb0115b3be33556cf14e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.023540 master-0 kubenswrapper[28504]: I0318 13:29:51.023475 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/1.log"
Mar 18 13:29:51.024639 master-0 kubenswrapper[28504]: I0318 13:29:51.024598 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager-cert-syncer/0.log"
Mar 18 13:29:51.025240 master-0 kubenswrapper[28504]: I0318 13:29:51.025214 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.029904 master-0 kubenswrapper[28504]: I0318 13:29:51.029817 28504 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e47f97eb0a0cc5aac7e96e57325228c9" podUID="9a1e88afeffbcb0115b3be33556cf14e"
Mar 18 13:29:51.059110 master-0 kubenswrapper[28504]: I0318 13:29:51.058991 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e88afeffbcb0115b3be33556cf14e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9a1e88afeffbcb0115b3be33556cf14e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.059110 master-0 kubenswrapper[28504]: I0318 13:29:51.059100 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e88afeffbcb0115b3be33556cf14e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9a1e88afeffbcb0115b3be33556cf14e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.059414 master-0 kubenswrapper[28504]: I0318 13:29:51.059217 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e88afeffbcb0115b3be33556cf14e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9a1e88afeffbcb0115b3be33556cf14e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.059414 master-0 kubenswrapper[28504]: I0318 13:29:51.059274 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e88afeffbcb0115b3be33556cf14e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9a1e88afeffbcb0115b3be33556cf14e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.160481 master-0 kubenswrapper[28504]: I0318 13:29:51.160329 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") pod \"e47f97eb0a0cc5aac7e96e57325228c9\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") "
Mar 18 13:29:51.160481 master-0 kubenswrapper[28504]: I0318 13:29:51.160418 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") pod \"e47f97eb0a0cc5aac7e96e57325228c9\" (UID: \"e47f97eb0a0cc5aac7e96e57325228c9\") "
Mar 18 13:29:51.160739 master-0 kubenswrapper[28504]: I0318 13:29:51.160525 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e47f97eb0a0cc5aac7e96e57325228c9" (UID: "e47f97eb0a0cc5aac7e96e57325228c9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:29:51.160739 master-0 kubenswrapper[28504]: I0318 13:29:51.160568 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e47f97eb0a0cc5aac7e96e57325228c9" (UID: "e47f97eb0a0cc5aac7e96e57325228c9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:29:51.160881 master-0 kubenswrapper[28504]: I0318 13:29:51.160856 28504 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:51.161002 master-0 kubenswrapper[28504]: I0318 13:29:51.160882 28504 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e47f97eb0a0cc5aac7e96e57325228c9-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:29:51.383577 master-0 kubenswrapper[28504]: I0318 13:29:51.383520 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager/1.log"
Mar 18 13:29:51.384599 master-0 kubenswrapper[28504]: I0318 13:29:51.384568 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager-cert-syncer/0.log"
Mar 18 13:29:51.385060 master-0 kubenswrapper[28504]: I0318 13:29:51.385010 28504 generic.go:334] "Generic (PLEG): container finished" podID="e47f97eb0a0cc5aac7e96e57325228c9" containerID="08b274aeaf9abbd5f8e5365d511a8523a672bf472c4f314741ea06a6ce223aa8" exitCode=0
Mar 18 13:29:51.385060 master-0 kubenswrapper[28504]: I0318 13:29:51.385052 28504 generic.go:334] "Generic (PLEG): container finished" podID="e47f97eb0a0cc5aac7e96e57325228c9" containerID="87b86f2af8e501ae34658be585500655faa626562bf4927f068e08991f40d160" exitCode=0
Mar 18 13:29:51.385172 master-0 kubenswrapper[28504]: I0318 13:29:51.385064 28504 generic.go:334] "Generic (PLEG): container finished" podID="e47f97eb0a0cc5aac7e96e57325228c9" containerID="e140efc28fb74fa94c1d843a6f6a44466dcb4914a6c8eada7179bb0663b14c56" exitCode=2
Mar 18 13:29:51.385172 master-0 kubenswrapper[28504]: I0318 13:29:51.385073 28504 scope.go:117] "RemoveContainer" containerID="9f7865da22d2864df6473b0ab5931f19c2c9b3c114b55a2d057d37caa85a26d7"
Mar 18 13:29:51.385172 master-0 kubenswrapper[28504]: I0318 13:29:51.385100 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:29:51.385172 master-0 kubenswrapper[28504]: I0318 13:29:51.385076 28504 generic.go:334] "Generic (PLEG): container finished" podID="e47f97eb0a0cc5aac7e96e57325228c9" containerID="b9ab4da2bf00eddad01601b81bba9f16f6744134ee63b0910cd8e62f9b4a3e0d" exitCode=0
Mar 18 13:29:51.385172 master-0 kubenswrapper[28504]: I0318 13:29:51.385172 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37139cfc3c201b83f82c4c778e201e9e4fa5f476ed738dc1d77b51b256fa3f72"
Mar 18 13:29:51.389392 master-0 kubenswrapper[28504]: I0318 13:29:51.389343 28504 generic.go:334] "Generic (PLEG): container finished" podID="34b80036-6868-4e0b-9f3a-84c2817e566d" containerID="1ac2bf0a18485c2d8def66bc41227b8995e207e008874bc2ef9e4f8c95264e9d" exitCode=0
Mar 18 13:29:51.389501 master-0 kubenswrapper[28504]: I0318 13:29:51.389391 28504 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e47f97eb0a0cc5aac7e96e57325228c9" podUID="9a1e88afeffbcb0115b3be33556cf14e"
Mar 18 13:29:51.389565 master-0 kubenswrapper[28504]: I0318 13:29:51.389453 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"34b80036-6868-4e0b-9f3a-84c2817e566d","Type":"ContainerDied","Data":"1ac2bf0a18485c2d8def66bc41227b8995e207e008874bc2ef9e4f8c95264e9d"}
Mar 18 13:29:51.434829 master-0 kubenswrapper[28504]: I0318 13:29:51.434773 28504 status_manager.go:861] "Pod was deleted and then recreated, skipping status update"
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e47f97eb0a0cc5aac7e96e57325228c9" podUID="9a1e88afeffbcb0115b3be33556cf14e" Mar 18 13:29:52.400002 master-0 kubenswrapper[28504]: I0318 13:29:52.399907 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e47f97eb0a0cc5aac7e96e57325228c9/kube-controller-manager-cert-syncer/0.log" Mar 18 13:29:52.684366 master-0 kubenswrapper[28504]: I0318 13:29:52.684242 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:52.764890 master-0 kubenswrapper[28504]: I0318 13:29:52.764817 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47f97eb0a0cc5aac7e96e57325228c9" path="/var/lib/kubelet/pods/e47f97eb0a0cc5aac7e96e57325228c9/volumes" Mar 18 13:29:52.789836 master-0 kubenswrapper[28504]: I0318 13:29:52.789507 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b80036-6868-4e0b-9f3a-84c2817e566d-kube-api-access\") pod \"34b80036-6868-4e0b-9f3a-84c2817e566d\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " Mar 18 13:29:52.789836 master-0 kubenswrapper[28504]: I0318 13:29:52.789678 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-var-lock\") pod \"34b80036-6868-4e0b-9f3a-84c2817e566d\" (UID: \"34b80036-6868-4e0b-9f3a-84c2817e566d\") " Mar 18 13:29:52.789836 master-0 kubenswrapper[28504]: I0318 13:29:52.789781 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-kubelet-dir\") pod \"34b80036-6868-4e0b-9f3a-84c2817e566d\" (UID: 
\"34b80036-6868-4e0b-9f3a-84c2817e566d\") " Mar 18 13:29:52.790220 master-0 kubenswrapper[28504]: I0318 13:29:52.790063 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-var-lock" (OuterVolumeSpecName: "var-lock") pod "34b80036-6868-4e0b-9f3a-84c2817e566d" (UID: "34b80036-6868-4e0b-9f3a-84c2817e566d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:52.790579 master-0 kubenswrapper[28504]: I0318 13:29:52.790483 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "34b80036-6868-4e0b-9f3a-84c2817e566d" (UID: "34b80036-6868-4e0b-9f3a-84c2817e566d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:52.790632 master-0 kubenswrapper[28504]: I0318 13:29:52.790513 28504 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:52.793599 master-0 kubenswrapper[28504]: I0318 13:29:52.793476 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b80036-6868-4e0b-9f3a-84c2817e566d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "34b80036-6868-4e0b-9f3a-84c2817e566d" (UID: "34b80036-6868-4e0b-9f3a-84c2817e566d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:29:52.891713 master-0 kubenswrapper[28504]: I0318 13:29:52.891619 28504 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b80036-6868-4e0b-9f3a-84c2817e566d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:52.891713 master-0 kubenswrapper[28504]: I0318 13:29:52.891700 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b80036-6868-4e0b-9f3a-84c2817e566d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:53.409327 master-0 kubenswrapper[28504]: I0318 13:29:53.409221 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"34b80036-6868-4e0b-9f3a-84c2817e566d","Type":"ContainerDied","Data":"c949b8ebf17ee3c4076ea934b539b7ddcbcf66da6b03a07f1d11c334faf155be"} Mar 18 13:29:53.409327 master-0 kubenswrapper[28504]: I0318 13:29:53.409311 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c949b8ebf17ee3c4076ea934b539b7ddcbcf66da6b03a07f1d11c334faf155be" Mar 18 13:29:53.410039 master-0 kubenswrapper[28504]: I0318 13:29:53.409326 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:58.970821 master-0 kubenswrapper[28504]: I0318 13:29:58.970746 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86544c5fdf-7nkss" Mar 18 13:29:58.973588 master-0 kubenswrapper[28504]: I0318 13:29:58.971870 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86544c5fdf-7nkss" Mar 18 13:29:58.976717 master-0 kubenswrapper[28504]: I0318 13:29:58.976652 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86544c5fdf-7nkss" Mar 18 13:29:59.455392 master-0 kubenswrapper[28504]: I0318 13:29:59.455322 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86544c5fdf-7nkss" Mar 18 13:30:03.749209 master-0 kubenswrapper[28504]: I0318 13:30:03.749140 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:03.768899 master-0 kubenswrapper[28504]: I0318 13:30:03.768842 28504 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="56dd0516-5588-4d88-b4ab-ebf29090a437" Mar 18 13:30:03.768899 master-0 kubenswrapper[28504]: I0318 13:30:03.768887 28504 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="56dd0516-5588-4d88-b4ab-ebf29090a437" Mar 18 13:30:03.785215 master-0 kubenswrapper[28504]: I0318 13:30:03.784731 28504 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:03.790514 master-0 kubenswrapper[28504]: I0318 13:30:03.790443 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:03.796004 master-0 kubenswrapper[28504]: I0318 13:30:03.795956 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:03.798032 master-0 kubenswrapper[28504]: I0318 13:30:03.797992 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:03.810057 master-0 kubenswrapper[28504]: I0318 13:30:03.808235 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:03.822321 master-0 kubenswrapper[28504]: W0318 13:30:03.822220 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a1e88afeffbcb0115b3be33556cf14e.slice/crio-26bd1b2c775c71363397ca9a671188ff14bd79e38852a4087b8529b5044a1e00 WatchSource:0}: Error finding container 26bd1b2c775c71363397ca9a671188ff14bd79e38852a4087b8529b5044a1e00: Status 404 returned error can't find the container with id 26bd1b2c775c71363397ca9a671188ff14bd79e38852a4087b8529b5044a1e00 Mar 18 13:30:04.489690 master-0 kubenswrapper[28504]: I0318 13:30:04.489639 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9a1e88afeffbcb0115b3be33556cf14e","Type":"ContainerStarted","Data":"b97b59cdd76d3006c4edfa58a3c7ff56460eaab464cefa70ec0a13167928ea64"} Mar 18 13:30:04.489853 master-0 kubenswrapper[28504]: I0318 13:30:04.489699 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9a1e88afeffbcb0115b3be33556cf14e","Type":"ContainerStarted","Data":"b59c2c83c0a36798a9346368ac9ddb7423c3ef3867ea81ca0c7750e161c3dba7"} Mar 18 13:30:04.489853 master-0 kubenswrapper[28504]: I0318 13:30:04.489710 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9a1e88afeffbcb0115b3be33556cf14e","Type":"ContainerStarted","Data":"26bd1b2c775c71363397ca9a671188ff14bd79e38852a4087b8529b5044a1e00"} Mar 18 13:30:05.501264 master-0 kubenswrapper[28504]: I0318 13:30:05.501142 28504 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9a1e88afeffbcb0115b3be33556cf14e","Type":"ContainerStarted","Data":"40c653c5d423b15170e9908482157a160f1cb8ed6cb23877eea7f4da25c3bf5f"} Mar 18 13:30:05.501264 master-0 kubenswrapper[28504]: I0318 13:30:05.501214 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9a1e88afeffbcb0115b3be33556cf14e","Type":"ContainerStarted","Data":"38f9e9bff39bf401b816bd40252732a9cf1bff7346baa33d9f4bf58608b7d003"} Mar 18 13:30:05.529699 master-0 kubenswrapper[28504]: I0318 13:30:05.529604 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.529580898 podStartE2EDuration="2.529580898s" podCreationTimestamp="2026-03-18 13:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:30:05.523723491 +0000 UTC m=+383.018529266" watchObservedRunningTime="2026-03-18 13:30:05.529580898 +0000 UTC m=+383.024386673" Mar 18 13:30:05.679795 master-0 kubenswrapper[28504]: I0318 13:30:05.679721 28504 scope.go:117] "RemoveContainer" containerID="b9ab4da2bf00eddad01601b81bba9f16f6744134ee63b0910cd8e62f9b4a3e0d" Mar 18 13:30:05.694684 master-0 kubenswrapper[28504]: I0318 13:30:05.694626 28504 scope.go:117] "RemoveContainer" containerID="87b86f2af8e501ae34658be585500655faa626562bf4927f068e08991f40d160" Mar 18 13:30:05.709712 master-0 kubenswrapper[28504]: I0318 13:30:05.709663 28504 scope.go:117] "RemoveContainer" containerID="e140efc28fb74fa94c1d843a6f6a44466dcb4914a6c8eada7179bb0663b14c56" Mar 18 13:30:13.799264 master-0 kubenswrapper[28504]: I0318 13:30:13.799167 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:13.799264 master-0 kubenswrapper[28504]: I0318 13:30:13.799278 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:13.800005 master-0 kubenswrapper[28504]: I0318 13:30:13.799293 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:13.800005 master-0 kubenswrapper[28504]: I0318 13:30:13.799307 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:13.803532 master-0 kubenswrapper[28504]: I0318 13:30:13.803443 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:13.804186 master-0 kubenswrapper[28504]: I0318 13:30:13.804126 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:14.581080 master-0 kubenswrapper[28504]: I0318 13:30:14.581003 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:14.581303 master-0 kubenswrapper[28504]: I0318 13:30:14.581192 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:22.780880 master-0 kubenswrapper[28504]: I0318 13:30:22.779883 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57ccc4885-h97bt"] Mar 18 13:30:23.648836 master-0 kubenswrapper[28504]: I0318 13:30:23.648744 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bb78b6b94-7nxcq"] Mar 18 
13:30:23.649422 master-0 kubenswrapper[28504]: E0318 13:30:23.649340 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b80036-6868-4e0b-9f3a-84c2817e566d" containerName="installer" Mar 18 13:30:23.649422 master-0 kubenswrapper[28504]: I0318 13:30:23.649368 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b80036-6868-4e0b-9f3a-84c2817e566d" containerName="installer" Mar 18 13:30:23.650951 master-0 kubenswrapper[28504]: I0318 13:30:23.649605 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b80036-6868-4e0b-9f3a-84c2817e566d" containerName="installer" Mar 18 13:30:23.650951 master-0 kubenswrapper[28504]: I0318 13:30:23.650505 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.670858 master-0 kubenswrapper[28504]: I0318 13:30:23.670793 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb78b6b94-7nxcq"] Mar 18 13:30:23.796954 master-0 kubenswrapper[28504]: I0318 13:30:23.796869 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-service-ca\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.797552 master-0 kubenswrapper[28504]: I0318 13:30:23.797022 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7cf7\" (UniqueName: \"kubernetes.io/projected/e007d827-7949-4726-a68f-53cbb78268f9-kube-api-access-c7cf7\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.797552 master-0 kubenswrapper[28504]: I0318 13:30:23.797387 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-serving-cert\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.797552 master-0 kubenswrapper[28504]: I0318 13:30:23.797458 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-console-config\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.797552 master-0 kubenswrapper[28504]: I0318 13:30:23.797478 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-trusted-ca-bundle\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.797552 master-0 kubenswrapper[28504]: I0318 13:30:23.797495 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-oauth-serving-cert\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.797552 master-0 kubenswrapper[28504]: I0318 13:30:23.797529 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-oauth-config\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " 
pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899000 master-0 kubenswrapper[28504]: I0318 13:30:23.898818 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-service-ca\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899287 master-0 kubenswrapper[28504]: I0318 13:30:23.899042 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7cf7\" (UniqueName: \"kubernetes.io/projected/e007d827-7949-4726-a68f-53cbb78268f9-kube-api-access-c7cf7\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899287 master-0 kubenswrapper[28504]: I0318 13:30:23.899100 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-serving-cert\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899287 master-0 kubenswrapper[28504]: I0318 13:30:23.899144 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-console-config\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899287 master-0 kubenswrapper[28504]: I0318 13:30:23.899168 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-trusted-ca-bundle\") pod \"console-7bb78b6b94-7nxcq\" (UID: 
\"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899287 master-0 kubenswrapper[28504]: I0318 13:30:23.899263 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-oauth-serving-cert\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.899657 master-0 kubenswrapper[28504]: I0318 13:30:23.899342 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-oauth-config\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.900514 master-0 kubenswrapper[28504]: I0318 13:30:23.900469 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-service-ca\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.900631 master-0 kubenswrapper[28504]: I0318 13:30:23.900601 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-console-config\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.900752 master-0 kubenswrapper[28504]: I0318 13:30:23.900722 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-trusted-ca-bundle\") pod \"console-7bb78b6b94-7nxcq\" 
(UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.901614 master-0 kubenswrapper[28504]: I0318 13:30:23.901558 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-oauth-serving-cert\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.904783 master-0 kubenswrapper[28504]: I0318 13:30:23.904437 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-serving-cert\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.906891 master-0 kubenswrapper[28504]: I0318 13:30:23.906835 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-oauth-config\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.929530 master-0 kubenswrapper[28504]: I0318 13:30:23.929472 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7cf7\" (UniqueName: \"kubernetes.io/projected/e007d827-7949-4726-a68f-53cbb78268f9-kube-api-access-c7cf7\") pod \"console-7bb78b6b94-7nxcq\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:23.988675 master-0 kubenswrapper[28504]: I0318 13:30:23.988608 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:24.447426 master-0 kubenswrapper[28504]: I0318 13:30:24.447319 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb78b6b94-7nxcq"] Mar 18 13:30:24.658588 master-0 kubenswrapper[28504]: I0318 13:30:24.658482 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb78b6b94-7nxcq" event={"ID":"e007d827-7949-4726-a68f-53cbb78268f9","Type":"ContainerStarted","Data":"c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a"} Mar 18 13:30:24.658588 master-0 kubenswrapper[28504]: I0318 13:30:24.658576 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb78b6b94-7nxcq" event={"ID":"e007d827-7949-4726-a68f-53cbb78268f9","Type":"ContainerStarted","Data":"671d2de22f61e4fce6ac7029f8005fb417033e767cfbcafb13b50eacaa0e186e"} Mar 18 13:30:24.686374 master-0 kubenswrapper[28504]: I0318 13:30:24.686257 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bb78b6b94-7nxcq" podStartSLOduration=1.6862336519999999 podStartE2EDuration="1.686233652s" podCreationTimestamp="2026-03-18 13:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:30:24.684383669 +0000 UTC m=+402.179189464" watchObservedRunningTime="2026-03-18 13:30:24.686233652 +0000 UTC m=+402.181039427" Mar 18 13:30:33.990040 master-0 kubenswrapper[28504]: I0318 13:30:33.989923 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:33.990715 master-0 kubenswrapper[28504]: I0318 13:30:33.990056 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:33.995322 master-0 kubenswrapper[28504]: I0318 13:30:33.995097 28504 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:34.767157 master-0 kubenswrapper[28504]: I0318 13:30:34.767062 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:30:34.861965 master-0 kubenswrapper[28504]: I0318 13:30:34.859061 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86544c5fdf-7nkss"] Mar 18 13:30:44.201654 master-0 kubenswrapper[28504]: I0318 13:30:44.201581 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:30:44.367438 master-0 kubenswrapper[28504]: I0318 13:30:44.367356 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.367683 master-0 kubenswrapper[28504]: I0318 13:30:44.367485 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.367683 master-0 kubenswrapper[28504]: I0318 13:30:44.367533 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.367683 master-0 kubenswrapper[28504]: I0318 13:30:44.367600 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.367683 master-0 kubenswrapper[28504]: I0318 13:30:44.367678 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.368107 master-0 kubenswrapper[28504]: I0318 13:30:44.368009 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.368227 master-0 kubenswrapper[28504]: I0318 13:30:44.368183 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") pod \"b79758b7-9129-496c-abec-80d455648454\" (UID: \"b79758b7-9129-496c-abec-80d455648454\") " Mar 18 13:30:44.368268 master-0 kubenswrapper[28504]: I0318 13:30:44.368243 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log" (OuterVolumeSpecName: "audit-log") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:30:44.368436 master-0 kubenswrapper[28504]: I0318 13:30:44.368392 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:44.368998 master-0 kubenswrapper[28504]: I0318 13:30:44.368961 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "metrics-server-audit-profiles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:44.369427 master-0 kubenswrapper[28504]: I0318 13:30:44.369398 28504 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.369427 master-0 kubenswrapper[28504]: I0318 13:30:44.369422 28504 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b79758b7-9129-496c-abec-80d455648454-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.369498 master-0 kubenswrapper[28504]: I0318 13:30:44.369437 28504 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b79758b7-9129-496c-abec-80d455648454-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.371862 master-0 kubenswrapper[28504]: I0318 13:30:44.371779 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:44.372529 master-0 kubenswrapper[28504]: I0318 13:30:44.372454 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:44.372529 master-0 kubenswrapper[28504]: I0318 13:30:44.372493 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:44.373122 master-0 kubenswrapper[28504]: I0318 13:30:44.373025 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6" (OuterVolumeSpecName: "kube-api-access-lpdw6") pod "b79758b7-9129-496c-abec-80d455648454" (UID: "b79758b7-9129-496c-abec-80d455648454"). InnerVolumeSpecName "kube-api-access-lpdw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:30:44.622637 master-0 kubenswrapper[28504]: I0318 13:30:44.622577 28504 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.622637 master-0 kubenswrapper[28504]: I0318 13:30:44.622618 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpdw6\" (UniqueName: \"kubernetes.io/projected/b79758b7-9129-496c-abec-80d455648454-kube-api-access-lpdw6\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.622637 master-0 kubenswrapper[28504]: I0318 13:30:44.622632 28504 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.622637 master-0 kubenswrapper[28504]: I0318 
13:30:44.622646 28504 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79758b7-9129-496c-abec-80d455648454-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:44.813173 master-0 kubenswrapper[28504]: I0318 13:30:44.813113 28504 generic.go:334] "Generic (PLEG): container finished" podID="b79758b7-9129-496c-abec-80d455648454" containerID="6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb" exitCode=0 Mar 18 13:30:44.813173 master-0 kubenswrapper[28504]: I0318 13:30:44.813161 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" event={"ID":"b79758b7-9129-496c-abec-80d455648454","Type":"ContainerDied","Data":"6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb"} Mar 18 13:30:44.813173 master-0 kubenswrapper[28504]: I0318 13:30:44.813191 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" event={"ID":"b79758b7-9129-496c-abec-80d455648454","Type":"ContainerDied","Data":"f8cc997e3f27ce3fc910341ff80d8b564acb4ef4acb174e7ab70b72471e906fc"} Mar 18 13:30:44.813173 master-0 kubenswrapper[28504]: I0318 13:30:44.813209 28504 scope.go:117] "RemoveContainer" containerID="6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb" Mar 18 13:30:44.813601 master-0 kubenswrapper[28504]: I0318 13:30:44.813306 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-648866dd9c-ztkrd" Mar 18 13:30:44.830353 master-0 kubenswrapper[28504]: I0318 13:30:44.830296 28504 scope.go:117] "RemoveContainer" containerID="6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb" Mar 18 13:30:44.830914 master-0 kubenswrapper[28504]: E0318 13:30:44.830822 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb\": container with ID starting with 6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb not found: ID does not exist" containerID="6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb" Mar 18 13:30:44.831015 master-0 kubenswrapper[28504]: I0318 13:30:44.830894 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb"} err="failed to get container status \"6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb\": rpc error: code = NotFound desc = could not find container \"6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb\": container with ID starting with 6e8ae0a076c41e3f18b14641ea28ac3afecdeb82b78e2928a3c0fb7c0dd943cb not found: ID does not exist" Mar 18 13:30:44.868146 master-0 kubenswrapper[28504]: I0318 13:30:44.868083 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-648866dd9c-ztkrd"] Mar 18 13:30:44.871374 master-0 kubenswrapper[28504]: I0318 13:30:44.871312 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-648866dd9c-ztkrd"] Mar 18 13:30:46.758965 master-0 kubenswrapper[28504]: I0318 13:30:46.758840 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79758b7-9129-496c-abec-80d455648454" 
path="/var/lib/kubelet/pods/b79758b7-9129-496c-abec-80d455648454/volumes" Mar 18 13:30:47.841367 master-0 kubenswrapper[28504]: I0318 13:30:47.841233 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57ccc4885-h97bt" podUID="83a66ab9-3aee-4035-92ba-2be81be6c4fd" containerName="console" containerID="cri-o://fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2" gracePeriod=15 Mar 18 13:30:48.248437 master-0 kubenswrapper[28504]: I0318 13:30:48.248360 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57ccc4885-h97bt_83a66ab9-3aee-4035-92ba-2be81be6c4fd/console/0.log" Mar 18 13:30:48.248437 master-0 kubenswrapper[28504]: I0318 13:30:48.248465 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:30:48.371021 master-0 kubenswrapper[28504]: I0318 13:30:48.370417 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-serving-cert\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.371263 master-0 kubenswrapper[28504]: I0318 13:30:48.371160 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-trusted-ca-bundle\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.371395 master-0 kubenswrapper[28504]: I0318 13:30:48.371364 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-oauth-config\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: 
\"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.371533 master-0 kubenswrapper[28504]: I0318 13:30:48.371503 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-oauth-serving-cert\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.371825 master-0 kubenswrapper[28504]: I0318 13:30:48.371782 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:48.372014 master-0 kubenswrapper[28504]: I0318 13:30:48.371973 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:48.372107 master-0 kubenswrapper[28504]: I0318 13:30:48.372086 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfpzt\" (UniqueName: \"kubernetes.io/projected/83a66ab9-3aee-4035-92ba-2be81be6c4fd-kube-api-access-xfpzt\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.372574 master-0 kubenswrapper[28504]: I0318 13:30:48.372529 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-service-ca\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.372574 master-0 kubenswrapper[28504]: I0318 13:30:48.372566 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-config\") pod \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\" (UID: \"83a66ab9-3aee-4035-92ba-2be81be6c4fd\") " Mar 18 13:30:48.373002 master-0 kubenswrapper[28504]: I0318 13:30:48.372964 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-config" (OuterVolumeSpecName: "console-config") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:48.373212 master-0 kubenswrapper[28504]: I0318 13:30:48.373147 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:48.373397 master-0 kubenswrapper[28504]: I0318 13:30:48.373338 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-service-ca" (OuterVolumeSpecName: "service-ca") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:48.373564 master-0 kubenswrapper[28504]: I0318 13:30:48.373528 28504 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:48.373564 master-0 kubenswrapper[28504]: I0318 13:30:48.373558 28504 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:48.373636 master-0 kubenswrapper[28504]: I0318 13:30:48.373572 28504 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:48.373636 master-0 kubenswrapper[28504]: I0318 13:30:48.373585 28504 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:48.373636 master-0 kubenswrapper[28504]: I0318 13:30:48.373597 28504 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a66ab9-3aee-4035-92ba-2be81be6c4fd-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 
13:30:48.374663 master-0 kubenswrapper[28504]: I0318 13:30:48.374632 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:48.375041 master-0 kubenswrapper[28504]: I0318 13:30:48.375005 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a66ab9-3aee-4035-92ba-2be81be6c4fd-kube-api-access-xfpzt" (OuterVolumeSpecName: "kube-api-access-xfpzt") pod "83a66ab9-3aee-4035-92ba-2be81be6c4fd" (UID: "83a66ab9-3aee-4035-92ba-2be81be6c4fd"). InnerVolumeSpecName "kube-api-access-xfpzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:30:48.475044 master-0 kubenswrapper[28504]: I0318 13:30:48.474892 28504 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a66ab9-3aee-4035-92ba-2be81be6c4fd-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:48.475044 master-0 kubenswrapper[28504]: I0318 13:30:48.474952 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfpzt\" (UniqueName: \"kubernetes.io/projected/83a66ab9-3aee-4035-92ba-2be81be6c4fd-kube-api-access-xfpzt\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:48.850435 master-0 kubenswrapper[28504]: I0318 13:30:48.850379 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57ccc4885-h97bt_83a66ab9-3aee-4035-92ba-2be81be6c4fd/console/0.log" Mar 18 13:30:48.851036 master-0 kubenswrapper[28504]: I0318 13:30:48.850442 28504 generic.go:334] "Generic (PLEG): container finished" podID="83a66ab9-3aee-4035-92ba-2be81be6c4fd" 
containerID="fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2" exitCode=2 Mar 18 13:30:48.851036 master-0 kubenswrapper[28504]: I0318 13:30:48.850474 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ccc4885-h97bt" event={"ID":"83a66ab9-3aee-4035-92ba-2be81be6c4fd","Type":"ContainerDied","Data":"fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2"} Mar 18 13:30:48.851036 master-0 kubenswrapper[28504]: I0318 13:30:48.850510 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ccc4885-h97bt" event={"ID":"83a66ab9-3aee-4035-92ba-2be81be6c4fd","Type":"ContainerDied","Data":"0bb1204ea76896fab40affb21f68cdc0af2d710eb112bdad25ca8642c9bfa363"} Mar 18 13:30:48.851036 master-0 kubenswrapper[28504]: I0318 13:30:48.850534 28504 scope.go:117] "RemoveContainer" containerID="fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2" Mar 18 13:30:48.851036 master-0 kubenswrapper[28504]: I0318 13:30:48.850535 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ccc4885-h97bt" Mar 18 13:30:48.867486 master-0 kubenswrapper[28504]: I0318 13:30:48.867438 28504 scope.go:117] "RemoveContainer" containerID="fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2" Mar 18 13:30:48.867862 master-0 kubenswrapper[28504]: E0318 13:30:48.867835 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2\": container with ID starting with fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2 not found: ID does not exist" containerID="fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2" Mar 18 13:30:48.867931 master-0 kubenswrapper[28504]: I0318 13:30:48.867869 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2"} err="failed to get container status \"fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2\": rpc error: code = NotFound desc = could not find container \"fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2\": container with ID starting with fd03aa61f6a4f5da369c8b6ae2c794fe10b72ff6e16beb962eb6cf6df9269fd2 not found: ID does not exist" Mar 18 13:30:48.888595 master-0 kubenswrapper[28504]: I0318 13:30:48.888492 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57ccc4885-h97bt"] Mar 18 13:30:48.895916 master-0 kubenswrapper[28504]: I0318 13:30:48.895858 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57ccc4885-h97bt"] Mar 18 13:30:50.760410 master-0 kubenswrapper[28504]: I0318 13:30:50.760315 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a66ab9-3aee-4035-92ba-2be81be6c4fd" path="/var/lib/kubelet/pods/83a66ab9-3aee-4035-92ba-2be81be6c4fd/volumes" Mar 18 
13:30:58.580116 master-0 kubenswrapper[28504]: I0318 13:30:58.580038 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql"] Mar 18 13:30:58.580667 master-0 kubenswrapper[28504]: E0318 13:30:58.580428 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79758b7-9129-496c-abec-80d455648454" containerName="metrics-server" Mar 18 13:30:58.580667 master-0 kubenswrapper[28504]: I0318 13:30:58.580446 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79758b7-9129-496c-abec-80d455648454" containerName="metrics-server" Mar 18 13:30:58.580667 master-0 kubenswrapper[28504]: E0318 13:30:58.580468 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a66ab9-3aee-4035-92ba-2be81be6c4fd" containerName="console" Mar 18 13:30:58.580667 master-0 kubenswrapper[28504]: I0318 13:30:58.580476 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a66ab9-3aee-4035-92ba-2be81be6c4fd" containerName="console" Mar 18 13:30:58.580802 master-0 kubenswrapper[28504]: I0318 13:30:58.580733 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79758b7-9129-496c-abec-80d455648454" containerName="metrics-server" Mar 18 13:30:58.580802 master-0 kubenswrapper[28504]: I0318 13:30:58.580760 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a66ab9-3aee-4035-92ba-2be81be6c4fd" containerName="console" Mar 18 13:30:58.581811 master-0 kubenswrapper[28504]: I0318 13:30:58.581772 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.584215 master-0 kubenswrapper[28504]: I0318 13:30:58.584174 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-whvd2" Mar 18 13:30:58.612708 master-0 kubenswrapper[28504]: I0318 13:30:58.612624 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql"] Mar 18 13:30:58.649116 master-0 kubenswrapper[28504]: I0318 13:30:58.649044 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hcwd\" (UniqueName: \"kubernetes.io/projected/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-kube-api-access-8hcwd\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.649116 master-0 kubenswrapper[28504]: I0318 13:30:58.649122 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.649515 master-0 kubenswrapper[28504]: I0318 13:30:58.649184 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.750223 master-0 kubenswrapper[28504]: I0318 13:30:58.750149 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hcwd\" (UniqueName: \"kubernetes.io/projected/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-kube-api-access-8hcwd\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.750452 master-0 kubenswrapper[28504]: I0318 13:30:58.750239 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.750452 master-0 kubenswrapper[28504]: I0318 13:30:58.750302 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.750785 master-0 kubenswrapper[28504]: I0318 13:30:58.750759 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 
13:30:58.751147 master-0 kubenswrapper[28504]: I0318 13:30:58.751097 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.771493 master-0 kubenswrapper[28504]: I0318 13:30:58.771433 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hcwd\" (UniqueName: \"kubernetes.io/projected/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-kube-api-access-8hcwd\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:58.898386 master-0 kubenswrapper[28504]: I0318 13:30:58.898244 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:30:59.416591 master-0 kubenswrapper[28504]: I0318 13:30:59.416536 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql"] Mar 18 13:30:59.417762 master-0 kubenswrapper[28504]: W0318 13:30:59.417676 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16a08d7d_afb3_4a8b_b4cb_c3e2001f8414.slice/crio-58468b0dd41c7890e9a9e8b00c129e473143ae68bd036b6963667909c4684037 WatchSource:0}: Error finding container 58468b0dd41c7890e9a9e8b00c129e473143ae68bd036b6963667909c4684037: Status 404 returned error can't find the container with id 58468b0dd41c7890e9a9e8b00c129e473143ae68bd036b6963667909c4684037 Mar 18 13:30:59.941255 master-0 kubenswrapper[28504]: I0318 13:30:59.941158 28504 generic.go:334] "Generic (PLEG): container finished" podID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerID="67c122e04011f9fe940d3e43f50d8243c2dee87466abf92fa519d34be26b5570" exitCode=0 Mar 18 13:30:59.941781 master-0 kubenswrapper[28504]: I0318 13:30:59.941222 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" event={"ID":"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414","Type":"ContainerDied","Data":"67c122e04011f9fe940d3e43f50d8243c2dee87466abf92fa519d34be26b5570"} Mar 18 13:30:59.941781 master-0 kubenswrapper[28504]: I0318 13:30:59.941294 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" event={"ID":"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414","Type":"ContainerStarted","Data":"58468b0dd41c7890e9a9e8b00c129e473143ae68bd036b6963667909c4684037"} Mar 18 13:30:59.944526 master-0 kubenswrapper[28504]: I0318 13:30:59.944303 28504 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:30:59.947595 master-0 kubenswrapper[28504]: I0318 13:30:59.947527 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-86544c5fdf-7nkss" podUID="b94880f8-cc2a-4724-adaa-d729d2ef9b1d" containerName="console" containerID="cri-o://10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13" gracePeriod=15 Mar 18 13:31:00.375071 master-0 kubenswrapper[28504]: I0318 13:31:00.375012 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86544c5fdf-7nkss_b94880f8-cc2a-4724-adaa-d729d2ef9b1d/console/0.log" Mar 18 13:31:00.375350 master-0 kubenswrapper[28504]: I0318 13:31:00.375118 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86544c5fdf-7nkss" Mar 18 13:31:00.500485 master-0 kubenswrapper[28504]: I0318 13:31:00.500407 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-service-ca\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.500723 master-0 kubenswrapper[28504]: I0318 13:31:00.500516 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-trusted-ca-bundle\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.500723 master-0 kubenswrapper[28504]: I0318 13:31:00.500545 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-config\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: 
\"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.500723 master-0 kubenswrapper[28504]: I0318 13:31:00.500579 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-serving-cert\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.500723 master-0 kubenswrapper[28504]: I0318 13:31:00.500660 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-oauth-config\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.500723 master-0 kubenswrapper[28504]: I0318 13:31:00.500685 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbsnz\" (UniqueName: \"kubernetes.io/projected/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-kube-api-access-lbsnz\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.500723 master-0 kubenswrapper[28504]: I0318 13:31:00.500709 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-oauth-serving-cert\") pod \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\" (UID: \"b94880f8-cc2a-4724-adaa-d729d2ef9b1d\") " Mar 18 13:31:00.501194 master-0 kubenswrapper[28504]: I0318 13:31:00.501129 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-service-ca" (OuterVolumeSpecName: "service-ca") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:31:00.501744 master-0 kubenswrapper[28504]: I0318 13:31:00.501703 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:31:00.502065 master-0 kubenswrapper[28504]: I0318 13:31:00.502008 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:31:00.502280 master-0 kubenswrapper[28504]: I0318 13:31:00.502212 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-config" (OuterVolumeSpecName: "console-config") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:31:00.504255 master-0 kubenswrapper[28504]: I0318 13:31:00.504197 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:31:00.505794 master-0 kubenswrapper[28504]: I0318 13:31:00.505201 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:31:00.506034 master-0 kubenswrapper[28504]: I0318 13:31:00.505929 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-kube-api-access-lbsnz" (OuterVolumeSpecName: "kube-api-access-lbsnz") pod "b94880f8-cc2a-4724-adaa-d729d2ef9b1d" (UID: "b94880f8-cc2a-4724-adaa-d729d2ef9b1d"). InnerVolumeSpecName "kube-api-access-lbsnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602242 28504 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602326 28504 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602340 28504 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602352 28504 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602364 28504 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602375 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbsnz\" (UniqueName: \"kubernetes.io/projected/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-kube-api-access-lbsnz\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.602410 master-0 kubenswrapper[28504]: I0318 13:31:00.602384 28504 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b94880f8-cc2a-4724-adaa-d729d2ef9b1d-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:00.950450 master-0 kubenswrapper[28504]: I0318 13:31:00.950369 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86544c5fdf-7nkss_b94880f8-cc2a-4724-adaa-d729d2ef9b1d/console/0.log" Mar 18 13:31:00.950450 master-0 kubenswrapper[28504]: I0318 13:31:00.950440 28504 generic.go:334] "Generic (PLEG): container finished" podID="b94880f8-cc2a-4724-adaa-d729d2ef9b1d" containerID="10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13" exitCode=2 Mar 18 13:31:00.951081 master-0 kubenswrapper[28504]: I0318 13:31:00.950483 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86544c5fdf-7nkss" event={"ID":"b94880f8-cc2a-4724-adaa-d729d2ef9b1d","Type":"ContainerDied","Data":"10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13"} Mar 18 13:31:00.951081 master-0 kubenswrapper[28504]: I0318 13:31:00.950517 28504 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-86544c5fdf-7nkss" event={"ID":"b94880f8-cc2a-4724-adaa-d729d2ef9b1d","Type":"ContainerDied","Data":"12697d38e1cd58a09343fac41cff8c082876fcfe1396e1b2bc01c38ebbb9d65b"} Mar 18 13:31:00.951081 master-0 kubenswrapper[28504]: I0318 13:31:00.950520 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86544c5fdf-7nkss" Mar 18 13:31:00.951081 master-0 kubenswrapper[28504]: I0318 13:31:00.950537 28504 scope.go:117] "RemoveContainer" containerID="10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13" Mar 18 13:31:00.971291 master-0 kubenswrapper[28504]: I0318 13:31:00.971183 28504 scope.go:117] "RemoveContainer" containerID="10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13" Mar 18 13:31:00.972910 master-0 kubenswrapper[28504]: E0318 13:31:00.972839 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13\": container with ID starting with 10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13 not found: ID does not exist" containerID="10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13" Mar 18 13:31:00.973030 master-0 kubenswrapper[28504]: I0318 13:31:00.972913 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13"} err="failed to get container status \"10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13\": rpc error: code = NotFound desc = could not find container \"10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13\": container with ID starting with 10059b05317dd15f63d6e7f2dcacbfb2cc4d517ca8d281c1e406b8dd1c66da13 not found: ID does not exist" Mar 18 13:31:00.979887 master-0 kubenswrapper[28504]: I0318 13:31:00.979803 28504 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-86544c5fdf-7nkss"] Mar 18 13:31:00.985343 master-0 kubenswrapper[28504]: I0318 13:31:00.985234 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-86544c5fdf-7nkss"] Mar 18 13:31:01.961108 master-0 kubenswrapper[28504]: I0318 13:31:01.961004 28504 generic.go:334] "Generic (PLEG): container finished" podID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerID="9ae78e176fe5d7553864b19c72f655d3503abe67b5b7f5347f169e51a1a5b8e8" exitCode=0 Mar 18 13:31:01.961108 master-0 kubenswrapper[28504]: I0318 13:31:01.961081 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" event={"ID":"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414","Type":"ContainerDied","Data":"9ae78e176fe5d7553864b19c72f655d3503abe67b5b7f5347f169e51a1a5b8e8"} Mar 18 13:31:02.757026 master-0 kubenswrapper[28504]: I0318 13:31:02.756965 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94880f8-cc2a-4724-adaa-d729d2ef9b1d" path="/var/lib/kubelet/pods/b94880f8-cc2a-4724-adaa-d729d2ef9b1d/volumes" Mar 18 13:31:02.970199 master-0 kubenswrapper[28504]: I0318 13:31:02.969672 28504 generic.go:334] "Generic (PLEG): container finished" podID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerID="e06f757f4d378782114b4260c56f55cb507cced0a9a432eff0f007122b6d406c" exitCode=0 Mar 18 13:31:02.970199 master-0 kubenswrapper[28504]: I0318 13:31:02.969716 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" event={"ID":"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414","Type":"ContainerDied","Data":"e06f757f4d378782114b4260c56f55cb507cced0a9a432eff0f007122b6d406c"} Mar 18 13:31:04.213042 master-0 kubenswrapper[28504]: I0318 13:31:04.211116 28504 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:31:04.373548 master-0 kubenswrapper[28504]: I0318 13:31:04.373406 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-bundle\") pod \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " Mar 18 13:31:04.373548 master-0 kubenswrapper[28504]: I0318 13:31:04.373532 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hcwd\" (UniqueName: \"kubernetes.io/projected/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-kube-api-access-8hcwd\") pod \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " Mar 18 13:31:04.373788 master-0 kubenswrapper[28504]: I0318 13:31:04.373655 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-util\") pod \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\" (UID: \"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414\") " Mar 18 13:31:04.374965 master-0 kubenswrapper[28504]: I0318 13:31:04.374910 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-bundle" (OuterVolumeSpecName: "bundle") pod "16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" (UID: "16a08d7d-afb3-4a8b-b4cb-c3e2001f8414"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:31:04.376850 master-0 kubenswrapper[28504]: I0318 13:31:04.376812 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-kube-api-access-8hcwd" (OuterVolumeSpecName: "kube-api-access-8hcwd") pod "16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" (UID: "16a08d7d-afb3-4a8b-b4cb-c3e2001f8414"). InnerVolumeSpecName "kube-api-access-8hcwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:31:04.395486 master-0 kubenswrapper[28504]: I0318 13:31:04.395418 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-util" (OuterVolumeSpecName: "util") pod "16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" (UID: "16a08d7d-afb3-4a8b-b4cb-c3e2001f8414"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:31:04.475043 master-0 kubenswrapper[28504]: I0318 13:31:04.474983 28504 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-util\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:04.475043 master-0 kubenswrapper[28504]: I0318 13:31:04.475019 28504 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:04.475043 master-0 kubenswrapper[28504]: I0318 13:31:04.475040 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hcwd\" (UniqueName: \"kubernetes.io/projected/16a08d7d-afb3-4a8b-b4cb-c3e2001f8414-kube-api-access-8hcwd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:04.985687 master-0 kubenswrapper[28504]: I0318 13:31:04.985619 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" event={"ID":"16a08d7d-afb3-4a8b-b4cb-c3e2001f8414","Type":"ContainerDied","Data":"58468b0dd41c7890e9a9e8b00c129e473143ae68bd036b6963667909c4684037"} Mar 18 13:31:04.985687 master-0 kubenswrapper[28504]: I0318 13:31:04.985681 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58468b0dd41c7890e9a9e8b00c129e473143ae68bd036b6963667909c4684037" Mar 18 13:31:04.985969 master-0 kubenswrapper[28504]: I0318 13:31:04.985775 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4b59ql" Mar 18 13:31:11.616039 master-0 kubenswrapper[28504]: I0318 13:31:11.615890 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-6dbdc6c64-kjqlc"] Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: E0318 13:31:11.616283 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94880f8-cc2a-4724-adaa-d729d2ef9b1d" containerName="console" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: I0318 13:31:11.616298 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94880f8-cc2a-4724-adaa-d729d2ef9b1d" containerName="console" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: E0318 13:31:11.616326 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="pull" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: I0318 13:31:11.616332 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="pull" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: E0318 13:31:11.616357 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="extract" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: I0318 
13:31:11.616370 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="extract" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: E0318 13:31:11.616387 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="util" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: I0318 13:31:11.616398 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="util" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: I0318 13:31:11.616535 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a08d7d-afb3-4a8b-b4cb-c3e2001f8414" containerName="extract" Mar 18 13:31:11.616702 master-0 kubenswrapper[28504]: I0318 13:31:11.616609 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94880f8-cc2a-4724-adaa-d729d2ef9b1d" containerName="console" Mar 18 13:31:11.617264 master-0 kubenswrapper[28504]: I0318 13:31:11.617236 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:11.622760 master-0 kubenswrapper[28504]: I0318 13:31:11.620296 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 18 13:31:11.622760 master-0 kubenswrapper[28504]: I0318 13:31:11.620387 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 18 13:31:11.622760 master-0 kubenswrapper[28504]: I0318 13:31:11.620549 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 18 13:31:11.622760 master-0 kubenswrapper[28504]: I0318 13:31:11.620732 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 18 13:31:11.622760 master-0 kubenswrapper[28504]: I0318 13:31:11.622402 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 18 13:31:11.669518 master-0 kubenswrapper[28504]: I0318 13:31:11.669445 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-6dbdc6c64-kjqlc"] Mar 18 13:31:11.984267 master-0 kubenswrapper[28504]: I0318 13:31:11.984192 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-metrics-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:11.984492 master-0 kubenswrapper[28504]: I0318 13:31:11.984372 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dkx\" (UniqueName: \"kubernetes.io/projected/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-kube-api-access-p9dkx\") pod \"lvms-operator-6dbdc6c64-kjqlc\" 
(UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:11.984553 master-0 kubenswrapper[28504]: I0318 13:31:11.984508 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-apiservice-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:11.984589 master-0 kubenswrapper[28504]: I0318 13:31:11.984573 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-webhook-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:11.984630 master-0 kubenswrapper[28504]: I0318 13:31:11.984600 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-socket-dir\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.086916 master-0 kubenswrapper[28504]: I0318 13:31:12.086852 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dkx\" (UniqueName: \"kubernetes.io/projected/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-kube-api-access-p9dkx\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.087122 master-0 kubenswrapper[28504]: I0318 13:31:12.086974 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-apiservice-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.087122 master-0 kubenswrapper[28504]: I0318 13:31:12.087022 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-webhook-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.087122 master-0 kubenswrapper[28504]: I0318 13:31:12.087047 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-socket-dir\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.087122 master-0 kubenswrapper[28504]: I0318 13:31:12.087091 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-metrics-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.092379 master-0 kubenswrapper[28504]: I0318 13:31:12.092031 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-socket-dir\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.107741 master-0 kubenswrapper[28504]: I0318 13:31:12.107684 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-metrics-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.108672 master-0 kubenswrapper[28504]: I0318 13:31:12.108610 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-webhook-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.111973 master-0 kubenswrapper[28504]: I0318 13:31:12.109209 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-apiservice-cert\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.203229 master-0 kubenswrapper[28504]: I0318 13:31:12.203196 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dkx\" (UniqueName: \"kubernetes.io/projected/c3a74bb9-0939-4dd9-ad29-50ac6f179ee0-kube-api-access-p9dkx\") pod \"lvms-operator-6dbdc6c64-kjqlc\" (UID: \"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0\") " pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.235051 master-0 kubenswrapper[28504]: I0318 13:31:12.234930 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:12.786990 master-0 kubenswrapper[28504]: I0318 13:31:12.786886 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-6dbdc6c64-kjqlc"] Mar 18 13:31:12.788734 master-0 kubenswrapper[28504]: W0318 13:31:12.788697 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3a74bb9_0939_4dd9_ad29_50ac6f179ee0.slice/crio-4509e58f729236dc089f35f096e43f22d421ff7a2ead9bd9064612a2a9bf819e WatchSource:0}: Error finding container 4509e58f729236dc089f35f096e43f22d421ff7a2ead9bd9064612a2a9bf819e: Status 404 returned error can't find the container with id 4509e58f729236dc089f35f096e43f22d421ff7a2ead9bd9064612a2a9bf819e Mar 18 13:31:13.287198 master-0 kubenswrapper[28504]: I0318 13:31:13.287098 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" event={"ID":"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0","Type":"ContainerStarted","Data":"4509e58f729236dc089f35f096e43f22d421ff7a2ead9bd9064612a2a9bf819e"} Mar 18 13:31:19.337385 master-0 kubenswrapper[28504]: I0318 13:31:19.337314 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" event={"ID":"c3a74bb9-0939-4dd9-ad29-50ac6f179ee0","Type":"ContainerStarted","Data":"9e69a4d2137a88a1621c5cf63ac4d0d5cc629c96bc077015a62770b06a660214"} Mar 18 13:31:19.337925 master-0 kubenswrapper[28504]: I0318 13:31:19.337544 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:19.341127 master-0 kubenswrapper[28504]: I0318 13:31:19.341095 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" Mar 18 13:31:19.399206 master-0 kubenswrapper[28504]: I0318 13:31:19.399037 28504 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-6dbdc6c64-kjqlc" podStartSLOduration=2.555979204 podStartE2EDuration="8.398993328s" podCreationTimestamp="2026-03-18 13:31:11 +0000 UTC" firstStartedPulling="2026-03-18 13:31:12.791877741 +0000 UTC m=+450.286683516" lastFinishedPulling="2026-03-18 13:31:18.634891865 +0000 UTC m=+456.129697640" observedRunningTime="2026-03-18 13:31:19.36779784 +0000 UTC m=+456.862603625" watchObservedRunningTime="2026-03-18 13:31:19.398993328 +0000 UTC m=+456.893799123" Mar 18 13:31:23.378115 master-0 kubenswrapper[28504]: I0318 13:31:23.378053 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"] Mar 18 13:31:23.380030 master-0 kubenswrapper[28504]: I0318 13:31:23.379992 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" Mar 18 13:31:23.382464 master-0 kubenswrapper[28504]: I0318 13:31:23.382405 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-whvd2" Mar 18 13:31:23.414771 master-0 kubenswrapper[28504]: I0318 13:31:23.414624 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"] Mar 18 13:31:23.561684 master-0 kubenswrapper[28504]: I0318 13:31:23.561612 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf92x\" (UniqueName: \"kubernetes.io/projected/a27785af-7ea3-4e72-b6c6-0c402f525cdd-kube-api-access-jf92x\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" Mar 18 13:31:23.561917 master-0 
kubenswrapper[28504]: I0318 13:31:23.561757 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" Mar 18 13:31:23.562218 master-0 kubenswrapper[28504]: I0318 13:31:23.562121 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" Mar 18 13:31:23.664093 master-0 kubenswrapper[28504]: I0318 13:31:23.663957 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" Mar 18 13:31:23.664310 master-0 kubenswrapper[28504]: I0318 13:31:23.664093 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" Mar 18 13:31:23.664364 master-0 kubenswrapper[28504]: I0318 13:31:23.664312 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jf92x\" (UniqueName: \"kubernetes.io/projected/a27785af-7ea3-4e72-b6c6-0c402f525cdd-kube-api-access-jf92x\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:23.664763 master-0 kubenswrapper[28504]: I0318 13:31:23.664720 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:23.665189 master-0 kubenswrapper[28504]: I0318 13:31:23.665159 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:23.684967 master-0 kubenswrapper[28504]: I0318 13:31:23.684857 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf92x\" (UniqueName: \"kubernetes.io/projected/a27785af-7ea3-4e72-b6c6-0c402f525cdd-kube-api-access-jf92x\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:23.701673 master-0 kubenswrapper[28504]: I0318 13:31:23.701607 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:24.120217 master-0 kubenswrapper[28504]: I0318 13:31:24.120173 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"]
Mar 18 13:31:24.202868 master-0 kubenswrapper[28504]: I0318 13:31:24.202818 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"]
Mar 18 13:31:24.205034 master-0 kubenswrapper[28504]: I0318 13:31:24.204990 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.210657 master-0 kubenswrapper[28504]: I0318 13:31:24.210599 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"]
Mar 18 13:31:24.376289 master-0 kubenswrapper[28504]: I0318 13:31:24.376047 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfv95\" (UniqueName: \"kubernetes.io/projected/47b73dbd-51f5-490b-88cc-73aeff73ba27-kube-api-access-rfv95\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.376526 master-0 kubenswrapper[28504]: I0318 13:31:24.376378 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.376790 master-0 kubenswrapper[28504]: I0318 13:31:24.376755 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.469633 master-0 kubenswrapper[28504]: I0318 13:31:24.469572 28504 generic.go:334] "Generic (PLEG): container finished" podID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerID="994157fc922b98b96d243ce75eb62dc26889866fd77e3dfb995a0be5e102e4e2" exitCode=0
Mar 18 13:31:24.470209 master-0 kubenswrapper[28504]: I0318 13:31:24.469638 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" event={"ID":"a27785af-7ea3-4e72-b6c6-0c402f525cdd","Type":"ContainerDied","Data":"994157fc922b98b96d243ce75eb62dc26889866fd77e3dfb995a0be5e102e4e2"}
Mar 18 13:31:24.470209 master-0 kubenswrapper[28504]: I0318 13:31:24.469695 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" event={"ID":"a27785af-7ea3-4e72-b6c6-0c402f525cdd","Type":"ContainerStarted","Data":"8df839c228f1183326ae03ea8bd3932af1c2b6444d96871e6a5dfa8886c980d3"}
Mar 18 13:31:24.478351 master-0 kubenswrapper[28504]: I0318 13:31:24.478268 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfv95\" (UniqueName: \"kubernetes.io/projected/47b73dbd-51f5-490b-88cc-73aeff73ba27-kube-api-access-rfv95\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.478589 master-0 kubenswrapper[28504]: I0318 13:31:24.478423 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.478589 master-0 kubenswrapper[28504]: I0318 13:31:24.478535 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.480340 master-0 kubenswrapper[28504]: I0318 13:31:24.479055 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.480340 master-0 kubenswrapper[28504]: I0318 13:31:24.479251 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.494100 master-0 kubenswrapper[28504]: I0318 13:31:24.494040 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfv95\" (UniqueName: \"kubernetes.io/projected/47b73dbd-51f5-490b-88cc-73aeff73ba27-kube-api-access-rfv95\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.540067 master-0 kubenswrapper[28504]: I0318 13:31:24.539996 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:24.999591 master-0 kubenswrapper[28504]: I0318 13:31:24.997044 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"]
Mar 18 13:31:25.351287 master-0 kubenswrapper[28504]: I0318 13:31:25.351194 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"]
Mar 18 13:31:25.354491 master-0 kubenswrapper[28504]: I0318 13:31:25.354448 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.481849 master-0 kubenswrapper[28504]: I0318 13:31:25.481784 28504 generic.go:334] "Generic (PLEG): container finished" podID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerID="80028a29663615d2ba47b11d0abc32ba36177c9734aa7704c8487b49df78f346" exitCode=0
Mar 18 13:31:25.482512 master-0 kubenswrapper[28504]: I0318 13:31:25.481853 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98" event={"ID":"47b73dbd-51f5-490b-88cc-73aeff73ba27","Type":"ContainerDied","Data":"80028a29663615d2ba47b11d0abc32ba36177c9734aa7704c8487b49df78f346"}
Mar 18 13:31:25.482512 master-0 kubenswrapper[28504]: I0318 13:31:25.481981 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98" event={"ID":"47b73dbd-51f5-490b-88cc-73aeff73ba27","Type":"ContainerStarted","Data":"61446f2ee31e4e6ae5ca8e28b1c9faf9c9e7042db4fff52d83e09074425cda2e"}
Mar 18 13:31:25.499219 master-0 kubenswrapper[28504]: I0318 13:31:25.499108 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.499669 master-0 kubenswrapper[28504]: I0318 13:31:25.499630 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8s8\" (UniqueName: \"kubernetes.io/projected/e711e65c-ca77-4723-9bde-907239370e87-kube-api-access-rt8s8\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.499737 master-0 kubenswrapper[28504]: I0318 13:31:25.499686 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.601367 master-0 kubenswrapper[28504]: I0318 13:31:25.601198 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8s8\" (UniqueName: \"kubernetes.io/projected/e711e65c-ca77-4723-9bde-907239370e87-kube-api-access-rt8s8\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.601577 master-0 kubenswrapper[28504]: I0318 13:31:25.601530 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.601726 master-0 kubenswrapper[28504]: I0318 13:31:25.601704 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.602371 master-0 kubenswrapper[28504]: I0318 13:31:25.602316 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.602449 master-0 kubenswrapper[28504]: I0318 13:31:25.602410 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:25.841058 master-0 kubenswrapper[28504]: I0318 13:31:25.831905 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"]
Mar 18 13:31:25.841058 master-0 kubenswrapper[28504]: I0318 13:31:25.838461 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8s8\" (UniqueName: \"kubernetes.io/projected/e711e65c-ca77-4723-9bde-907239370e87-kube-api-access-rt8s8\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:26.012158 master-0 kubenswrapper[28504]: I0318 13:31:26.012072 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:26.439733 master-0 kubenswrapper[28504]: I0318 13:31:26.439682 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"]
Mar 18 13:31:26.441206 master-0 kubenswrapper[28504]: W0318 13:31:26.441151 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode711e65c_ca77_4723_9bde_907239370e87.slice/crio-7e58112e223183d22eb3d3af16a4bc89f649135ba91a6bcf0774a3ff3515f4a6 WatchSource:0}: Error finding container 7e58112e223183d22eb3d3af16a4bc89f649135ba91a6bcf0774a3ff3515f4a6: Status 404 returned error can't find the container with id 7e58112e223183d22eb3d3af16a4bc89f649135ba91a6bcf0774a3ff3515f4a6
Mar 18 13:31:26.490507 master-0 kubenswrapper[28504]: I0318 13:31:26.490451 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8" event={"ID":"e711e65c-ca77-4723-9bde-907239370e87","Type":"ContainerStarted","Data":"7e58112e223183d22eb3d3af16a4bc89f649135ba91a6bcf0774a3ff3515f4a6"}
Mar 18 13:31:27.500223 master-0 kubenswrapper[28504]: I0318 13:31:27.500139 28504 generic.go:334] "Generic (PLEG): container finished" podID="e711e65c-ca77-4723-9bde-907239370e87" containerID="9083691db751f84f49c273fdc675aa78b46ca9eaa70f2c38a674d97cab2e0f56" exitCode=0
Mar 18 13:31:27.500223 master-0 kubenswrapper[28504]: I0318 13:31:27.500204 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8" event={"ID":"e711e65c-ca77-4723-9bde-907239370e87","Type":"ContainerDied","Data":"9083691db751f84f49c273fdc675aa78b46ca9eaa70f2c38a674d97cab2e0f56"}
Mar 18 13:31:29.519449 master-0 kubenswrapper[28504]: I0318 13:31:29.519360 28504 generic.go:334] "Generic (PLEG): container finished" podID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerID="99e5ed9affd73e7db4aa9aa271731b3eead72b12e90b22be9a9a063b7440b613" exitCode=0
Mar 18 13:31:29.519449 master-0 kubenswrapper[28504]: I0318 13:31:29.519443 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98" event={"ID":"47b73dbd-51f5-490b-88cc-73aeff73ba27","Type":"ContainerDied","Data":"99e5ed9affd73e7db4aa9aa271731b3eead72b12e90b22be9a9a063b7440b613"}
Mar 18 13:31:29.522314 master-0 kubenswrapper[28504]: I0318 13:31:29.522231 28504 generic.go:334] "Generic (PLEG): container finished" podID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerID="9f86571206abeff8a5eeaf15d76c445375907e83f06df66143fbc23ffcd02906" exitCode=0
Mar 18 13:31:29.522314 master-0 kubenswrapper[28504]: I0318 13:31:29.522294 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" event={"ID":"a27785af-7ea3-4e72-b6c6-0c402f525cdd","Type":"ContainerDied","Data":"9f86571206abeff8a5eeaf15d76c445375907e83f06df66143fbc23ffcd02906"}
Mar 18 13:31:30.530432 master-0 kubenswrapper[28504]: I0318 13:31:30.530370 28504 generic.go:334] "Generic (PLEG): container finished" podID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerID="379e80cfaa641fc8a7703714cb554ca7da8789c8ac6ee47efe1d3b507f6bb095" exitCode=0
Mar 18 13:31:30.531173 master-0 kubenswrapper[28504]: I0318 13:31:30.530468 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98" event={"ID":"47b73dbd-51f5-490b-88cc-73aeff73ba27","Type":"ContainerDied","Data":"379e80cfaa641fc8a7703714cb554ca7da8789c8ac6ee47efe1d3b507f6bb095"}
Mar 18 13:31:30.532398 master-0 kubenswrapper[28504]: I0318 13:31:30.532356 28504 generic.go:334] "Generic (PLEG): container finished" podID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerID="275c4270e76a4b3d889e69e03cc455a6fb03d205d5e9ccc926a633126c5a852c" exitCode=0
Mar 18 13:31:30.532491 master-0 kubenswrapper[28504]: I0318 13:31:30.532424 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" event={"ID":"a27785af-7ea3-4e72-b6c6-0c402f525cdd","Type":"ContainerDied","Data":"275c4270e76a4b3d889e69e03cc455a6fb03d205d5e9ccc926a633126c5a852c"}
Mar 18 13:31:30.535094 master-0 kubenswrapper[28504]: I0318 13:31:30.535048 28504 generic.go:334] "Generic (PLEG): container finished" podID="e711e65c-ca77-4723-9bde-907239370e87" containerID="3a74deb02b4c3117fdfbc61b62020abefc2b67fc32c9147197f13c42e2f09ac8" exitCode=0
Mar 18 13:31:30.535094 master-0 kubenswrapper[28504]: I0318 13:31:30.535084 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8" event={"ID":"e711e65c-ca77-4723-9bde-907239370e87","Type":"ContainerDied","Data":"3a74deb02b4c3117fdfbc61b62020abefc2b67fc32c9147197f13c42e2f09ac8"}
Mar 18 13:31:31.544797 master-0 kubenswrapper[28504]: I0318 13:31:31.544731 28504 generic.go:334] "Generic (PLEG): container finished" podID="e711e65c-ca77-4723-9bde-907239370e87" containerID="8c20883b4027dfebf571b9c6d13b963178d448ee92d781aa17f524762d5857b6" exitCode=0
Mar 18 13:31:31.545332 master-0 kubenswrapper[28504]: I0318 13:31:31.544971 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8" event={"ID":"e711e65c-ca77-4723-9bde-907239370e87","Type":"ContainerDied","Data":"8c20883b4027dfebf571b9c6d13b963178d448ee92d781aa17f524762d5857b6"}
Mar 18 13:31:31.983880 master-0 kubenswrapper[28504]: I0318 13:31:31.983823 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:31.986680 master-0 kubenswrapper[28504]: I0318 13:31:31.986645 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:32.046705 master-0 kubenswrapper[28504]: I0318 13:31:32.046609 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-util\") pod \"47b73dbd-51f5-490b-88cc-73aeff73ba27\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") "
Mar 18 13:31:32.046705 master-0 kubenswrapper[28504]: I0318 13:31:32.046686 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfv95\" (UniqueName: \"kubernetes.io/projected/47b73dbd-51f5-490b-88cc-73aeff73ba27-kube-api-access-rfv95\") pod \"47b73dbd-51f5-490b-88cc-73aeff73ba27\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") "
Mar 18 13:31:32.047138 master-0 kubenswrapper[28504]: I0318 13:31:32.046725 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-bundle\") pod \"47b73dbd-51f5-490b-88cc-73aeff73ba27\" (UID: \"47b73dbd-51f5-490b-88cc-73aeff73ba27\") "
Mar 18 13:31:32.047138 master-0 kubenswrapper[28504]: I0318 13:31:32.046863 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-bundle\") pod \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") "
Mar 18 13:31:32.047138 master-0 kubenswrapper[28504]: I0318 13:31:32.046939 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-util\") pod \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") "
Mar 18 13:31:32.047138 master-0 kubenswrapper[28504]: I0318 13:31:32.047032 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf92x\" (UniqueName: \"kubernetes.io/projected/a27785af-7ea3-4e72-b6c6-0c402f525cdd-kube-api-access-jf92x\") pod \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\" (UID: \"a27785af-7ea3-4e72-b6c6-0c402f525cdd\") "
Mar 18 13:31:32.048971 master-0 kubenswrapper[28504]: I0318 13:31:32.048889 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-bundle" (OuterVolumeSpecName: "bundle") pod "47b73dbd-51f5-490b-88cc-73aeff73ba27" (UID: "47b73dbd-51f5-490b-88cc-73aeff73ba27"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:31:32.049653 master-0 kubenswrapper[28504]: I0318 13:31:32.049591 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-bundle" (OuterVolumeSpecName: "bundle") pod "a27785af-7ea3-4e72-b6c6-0c402f525cdd" (UID: "a27785af-7ea3-4e72-b6c6-0c402f525cdd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:31:32.050377 master-0 kubenswrapper[28504]: I0318 13:31:32.050324 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27785af-7ea3-4e72-b6c6-0c402f525cdd-kube-api-access-jf92x" (OuterVolumeSpecName: "kube-api-access-jf92x") pod "a27785af-7ea3-4e72-b6c6-0c402f525cdd" (UID: "a27785af-7ea3-4e72-b6c6-0c402f525cdd"). InnerVolumeSpecName "kube-api-access-jf92x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:31:32.052059 master-0 kubenswrapper[28504]: I0318 13:31:32.052013 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b73dbd-51f5-490b-88cc-73aeff73ba27-kube-api-access-rfv95" (OuterVolumeSpecName: "kube-api-access-rfv95") pod "47b73dbd-51f5-490b-88cc-73aeff73ba27" (UID: "47b73dbd-51f5-490b-88cc-73aeff73ba27"). InnerVolumeSpecName "kube-api-access-rfv95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:31:32.059640 master-0 kubenswrapper[28504]: I0318 13:31:32.059556 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-util" (OuterVolumeSpecName: "util") pod "47b73dbd-51f5-490b-88cc-73aeff73ba27" (UID: "47b73dbd-51f5-490b-88cc-73aeff73ba27"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:31:32.063972 master-0 kubenswrapper[28504]: I0318 13:31:32.063875 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-util" (OuterVolumeSpecName: "util") pod "a27785af-7ea3-4e72-b6c6-0c402f525cdd" (UID: "a27785af-7ea3-4e72-b6c6-0c402f525cdd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:31:32.149204 master-0 kubenswrapper[28504]: I0318 13:31:32.149127 28504 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-util\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.149204 master-0 kubenswrapper[28504]: I0318 13:31:32.149182 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfv95\" (UniqueName: \"kubernetes.io/projected/47b73dbd-51f5-490b-88cc-73aeff73ba27-kube-api-access-rfv95\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.149204 master-0 kubenswrapper[28504]: I0318 13:31:32.149196 28504 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47b73dbd-51f5-490b-88cc-73aeff73ba27-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.149204 master-0 kubenswrapper[28504]: I0318 13:31:32.149208 28504 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.149204 master-0 kubenswrapper[28504]: I0318 13:31:32.149218 28504 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27785af-7ea3-4e72-b6c6-0c402f525cdd-util\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.149204 master-0 kubenswrapper[28504]: I0318 13:31:32.149229 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf92x\" (UniqueName: \"kubernetes.io/projected/a27785af-7ea3-4e72-b6c6-0c402f525cdd-kube-api-access-jf92x\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.558443 master-0 kubenswrapper[28504]: I0318 13:31:32.557698 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6"
Mar 18 13:31:32.558443 master-0 kubenswrapper[28504]: I0318 13:31:32.558008 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5w5qh6" event={"ID":"a27785af-7ea3-4e72-b6c6-0c402f525cdd","Type":"ContainerDied","Data":"8df839c228f1183326ae03ea8bd3932af1c2b6444d96871e6a5dfa8886c980d3"}
Mar 18 13:31:32.558443 master-0 kubenswrapper[28504]: I0318 13:31:32.558059 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df839c228f1183326ae03ea8bd3932af1c2b6444d96871e6a5dfa8886c980d3"
Mar 18 13:31:32.560502 master-0 kubenswrapper[28504]: I0318 13:31:32.560433 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98" event={"ID":"47b73dbd-51f5-490b-88cc-73aeff73ba27","Type":"ContainerDied","Data":"61446f2ee31e4e6ae5ca8e28b1c9faf9c9e7042db4fff52d83e09074425cda2e"}
Mar 18 13:31:32.560582 master-0 kubenswrapper[28504]: I0318 13:31:32.560513 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61446f2ee31e4e6ae5ca8e28b1c9faf9c9e7042db4fff52d83e09074425cda2e"
Mar 18 13:31:32.560582 master-0 kubenswrapper[28504]: I0318 13:31:32.560461 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18pq98"
Mar 18 13:31:32.817696 master-0 kubenswrapper[28504]: I0318 13:31:32.817582 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:32.866305 master-0 kubenswrapper[28504]: I0318 13:31:32.864382 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-util\") pod \"e711e65c-ca77-4723-9bde-907239370e87\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") "
Mar 18 13:31:32.866305 master-0 kubenswrapper[28504]: I0318 13:31:32.864610 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt8s8\" (UniqueName: \"kubernetes.io/projected/e711e65c-ca77-4723-9bde-907239370e87-kube-api-access-rt8s8\") pod \"e711e65c-ca77-4723-9bde-907239370e87\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") "
Mar 18 13:31:32.866305 master-0 kubenswrapper[28504]: I0318 13:31:32.864641 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-bundle\") pod \"e711e65c-ca77-4723-9bde-907239370e87\" (UID: \"e711e65c-ca77-4723-9bde-907239370e87\") "
Mar 18 13:31:32.866305 master-0 kubenswrapper[28504]: I0318 13:31:32.865366 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-bundle" (OuterVolumeSpecName: "bundle") pod "e711e65c-ca77-4723-9bde-907239370e87" (UID: "e711e65c-ca77-4723-9bde-907239370e87"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:31:32.869185 master-0 kubenswrapper[28504]: I0318 13:31:32.867169 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e711e65c-ca77-4723-9bde-907239370e87-kube-api-access-rt8s8" (OuterVolumeSpecName: "kube-api-access-rt8s8") pod "e711e65c-ca77-4723-9bde-907239370e87" (UID: "e711e65c-ca77-4723-9bde-907239370e87"). InnerVolumeSpecName "kube-api-access-rt8s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:31:32.875597 master-0 kubenswrapper[28504]: I0318 13:31:32.875528 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-util" (OuterVolumeSpecName: "util") pod "e711e65c-ca77-4723-9bde-907239370e87" (UID: "e711e65c-ca77-4723-9bde-907239370e87"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:31:32.966097 master-0 kubenswrapper[28504]: I0318 13:31:32.966033 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt8s8\" (UniqueName: \"kubernetes.io/projected/e711e65c-ca77-4723-9bde-907239370e87-kube-api-access-rt8s8\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.966097 master-0 kubenswrapper[28504]: I0318 13:31:32.966080 28504 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:32.966097 master-0 kubenswrapper[28504]: I0318 13:31:32.966091 28504 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e711e65c-ca77-4723-9bde-907239370e87-util\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:33.572256 master-0 kubenswrapper[28504]: I0318 13:31:33.572181 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8" event={"ID":"e711e65c-ca77-4723-9bde-907239370e87","Type":"ContainerDied","Data":"7e58112e223183d22eb3d3af16a4bc89f649135ba91a6bcf0774a3ff3515f4a6"}
Mar 18 13:31:33.572256 master-0 kubenswrapper[28504]: I0318 13:31:33.572240 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874c96w8"
Mar 18 13:31:33.572256 master-0 kubenswrapper[28504]: I0318 13:31:33.572262 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e58112e223183d22eb3d3af16a4bc89f649135ba91a6bcf0774a3ff3515f4a6"
Mar 18 13:31:33.783471 master-0 kubenswrapper[28504]: I0318 13:31:33.783400 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9"]
Mar 18 13:31:33.783774 master-0 kubenswrapper[28504]: E0318 13:31:33.783751 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="pull"
Mar 18 13:31:33.783774 master-0 kubenswrapper[28504]: I0318 13:31:33.783772 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="pull"
Mar 18 13:31:33.783860 master-0 kubenswrapper[28504]: E0318 13:31:33.783806 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="util"
Mar 18 13:31:33.783860 master-0 kubenswrapper[28504]: I0318 13:31:33.783814 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="util"
Mar 18 13:31:33.783860 master-0 kubenswrapper[28504]: E0318 13:31:33.783836 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="util"
Mar 18 13:31:33.783860 master-0 kubenswrapper[28504]: I0318 13:31:33.783844 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="util"
Mar 18 13:31:33.783860 master-0 kubenswrapper[28504]: E0318 13:31:33.783853 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="extract"
Mar 18 13:31:33.783860 master-0 kubenswrapper[28504]: I0318 13:31:33.783861 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="extract"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: E0318 13:31:33.783870 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="util"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: I0318 13:31:33.783881 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="util"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: E0318 13:31:33.783890 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="pull"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: I0318 13:31:33.783897 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="pull"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: E0318 13:31:33.783910 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="extract"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: I0318 13:31:33.783918 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="extract"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: E0318 13:31:33.783934 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="extract"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: I0318 13:31:33.783959 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="extract"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: E0318 13:31:33.783970 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="pull"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: I0318 13:31:33.783977 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="pull"
Mar 18 13:31:33.784140 master-0 kubenswrapper[28504]: I0318 13:31:33.784133 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b73dbd-51f5-490b-88cc-73aeff73ba27" containerName="extract"
Mar 18 13:31:33.784485 master-0 kubenswrapper[28504]: I0318 13:31:33.784178 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27785af-7ea3-4e72-b6c6-0c402f525cdd" containerName="extract"
Mar 18 13:31:33.784485 master-0 kubenswrapper[28504]: I0318 13:31:33.784194 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e711e65c-ca77-4723-9bde-907239370e87" containerName="extract"
Mar 18 13:31:33.785311 master-0 kubenswrapper[28504]: I0318 13:31:33.785269 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9"
Mar 18 13:31:33.787526 master-0 kubenswrapper[28504]: I0318 13:31:33.787472 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-whvd2"
Mar 18 13:31:33.843577 master-0 kubenswrapper[28504]: I0318 13:31:33.843519 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9"]
Mar 18 13:31:33.880858 master-0 kubenswrapper[28504]: I0318 13:31:33.880775 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcbv\" (UniqueName: \"kubernetes.io/projected/3fb697e8-6f9d-4da2-a8dd-e16382be11df-kube-api-access-kzcbv\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9"
Mar 18 13:31:33.881383 master-0 kubenswrapper[28504]: I0318 13:31:33.881168 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9"
Mar 18 13:31:33.881495 master-0 kubenswrapper[28504]: I0318 13:31:33.881462 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") "
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:33.982665 master-0 kubenswrapper[28504]: I0318 13:31:33.982590 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:33.983022 master-0 kubenswrapper[28504]: I0318 13:31:33.982997 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:33.983229 master-0 kubenswrapper[28504]: I0318 13:31:33.983165 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:33.983290 master-0 kubenswrapper[28504]: I0318 13:31:33.983238 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcbv\" (UniqueName: \"kubernetes.io/projected/3fb697e8-6f9d-4da2-a8dd-e16382be11df-kube-api-access-kzcbv\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 
13:31:33.983584 master-0 kubenswrapper[28504]: I0318 13:31:33.983545 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:33.999119 master-0 kubenswrapper[28504]: I0318 13:31:33.999058 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcbv\" (UniqueName: \"kubernetes.io/projected/3fb697e8-6f9d-4da2-a8dd-e16382be11df-kube-api-access-kzcbv\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:34.161170 master-0 kubenswrapper[28504]: I0318 13:31:34.161031 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:34.600251 master-0 kubenswrapper[28504]: I0318 13:31:34.599959 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9"] Mar 18 13:31:35.589881 master-0 kubenswrapper[28504]: I0318 13:31:35.589823 28504 generic.go:334] "Generic (PLEG): container finished" podID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerID="6eca9b4797ca9e08d224ca0d10f594f465a077af1facd8bb946a6e87a69d4826" exitCode=0 Mar 18 13:31:35.590134 master-0 kubenswrapper[28504]: I0318 13:31:35.589891 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" event={"ID":"3fb697e8-6f9d-4da2-a8dd-e16382be11df","Type":"ContainerDied","Data":"6eca9b4797ca9e08d224ca0d10f594f465a077af1facd8bb946a6e87a69d4826"} Mar 18 13:31:35.590134 master-0 kubenswrapper[28504]: I0318 13:31:35.589926 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" event={"ID":"3fb697e8-6f9d-4da2-a8dd-e16382be11df","Type":"ContainerStarted","Data":"5b76e16fb9c966321e4a601f8509faa43f185c3f8cd133636aa10fdef78add38"} Mar 18 13:31:36.775973 master-0 kubenswrapper[28504]: I0318 13:31:36.774385 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29"] Mar 18 13:31:36.775973 master-0 kubenswrapper[28504]: I0318 13:31:36.775361 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:36.781515 master-0 kubenswrapper[28504]: I0318 13:31:36.781409 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 18 13:31:36.781808 master-0 kubenswrapper[28504]: I0318 13:31:36.781770 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 18 13:31:36.782567 master-0 kubenswrapper[28504]: I0318 13:31:36.782513 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29"] Mar 18 13:31:36.821083 master-0 kubenswrapper[28504]: I0318 13:31:36.820979 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhpp2\" (UniqueName: \"kubernetes.io/projected/58479c1b-d11f-4cfa-940d-5020366c3f41-kube-api-access-hhpp2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-q9l29\" (UID: \"58479c1b-d11f-4cfa-940d-5020366c3f41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:36.821083 master-0 kubenswrapper[28504]: I0318 13:31:36.821074 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58479c1b-d11f-4cfa-940d-5020366c3f41-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-q9l29\" (UID: \"58479c1b-d11f-4cfa-940d-5020366c3f41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:36.924732 master-0 kubenswrapper[28504]: I0318 13:31:36.924144 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58479c1b-d11f-4cfa-940d-5020366c3f41-tmp\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-q9l29\" (UID: \"58479c1b-d11f-4cfa-940d-5020366c3f41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:36.924732 master-0 kubenswrapper[28504]: I0318 13:31:36.924294 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhpp2\" (UniqueName: \"kubernetes.io/projected/58479c1b-d11f-4cfa-940d-5020366c3f41-kube-api-access-hhpp2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-q9l29\" (UID: \"58479c1b-d11f-4cfa-940d-5020366c3f41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:36.925132 master-0 kubenswrapper[28504]: I0318 13:31:36.925081 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58479c1b-d11f-4cfa-940d-5020366c3f41-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-q9l29\" (UID: \"58479c1b-d11f-4cfa-940d-5020366c3f41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:36.950877 master-0 kubenswrapper[28504]: I0318 13:31:36.950802 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhpp2\" (UniqueName: \"kubernetes.io/projected/58479c1b-d11f-4cfa-940d-5020366c3f41-kube-api-access-hhpp2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-q9l29\" (UID: \"58479c1b-d11f-4cfa-940d-5020366c3f41\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:37.102209 master-0 kubenswrapper[28504]: I0318 13:31:37.101783 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" Mar 18 13:31:37.578627 master-0 kubenswrapper[28504]: I0318 13:31:37.578555 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29"] Mar 18 13:31:37.658151 master-0 kubenswrapper[28504]: I0318 13:31:37.658040 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" event={"ID":"58479c1b-d11f-4cfa-940d-5020366c3f41","Type":"ContainerStarted","Data":"804a9bb88ec4756aeab57a40900ee8605ea757548dbe347e0020a5c1f2bba50c"} Mar 18 13:31:37.661505 master-0 kubenswrapper[28504]: I0318 13:31:37.661438 28504 generic.go:334] "Generic (PLEG): container finished" podID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerID="45347007d004199dcaf26f03a272c14b22086fcf574e60bc8a9af7aec8b5326d" exitCode=0 Mar 18 13:31:37.661505 master-0 kubenswrapper[28504]: I0318 13:31:37.661495 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" event={"ID":"3fb697e8-6f9d-4da2-a8dd-e16382be11df","Type":"ContainerDied","Data":"45347007d004199dcaf26f03a272c14b22086fcf574e60bc8a9af7aec8b5326d"} Mar 18 13:31:38.673802 master-0 kubenswrapper[28504]: I0318 13:31:38.673751 28504 generic.go:334] "Generic (PLEG): container finished" podID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerID="34280e25760ab6e706198b7404933869ccb794e32ccc448f2cb2690139535056" exitCode=0 Mar 18 13:31:38.673802 master-0 kubenswrapper[28504]: I0318 13:31:38.673804 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" event={"ID":"3fb697e8-6f9d-4da2-a8dd-e16382be11df","Type":"ContainerDied","Data":"34280e25760ab6e706198b7404933869ccb794e32ccc448f2cb2690139535056"} Mar 18 
13:31:40.125689 master-0 kubenswrapper[28504]: I0318 13:31:40.125632 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:40.286810 master-0 kubenswrapper[28504]: I0318 13:31:40.286736 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzcbv\" (UniqueName: \"kubernetes.io/projected/3fb697e8-6f9d-4da2-a8dd-e16382be11df-kube-api-access-kzcbv\") pod \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " Mar 18 13:31:40.287068 master-0 kubenswrapper[28504]: I0318 13:31:40.286955 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-bundle\") pod \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " Mar 18 13:31:40.287068 master-0 kubenswrapper[28504]: I0318 13:31:40.287003 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-util\") pod \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\" (UID: \"3fb697e8-6f9d-4da2-a8dd-e16382be11df\") " Mar 18 13:31:40.289378 master-0 kubenswrapper[28504]: I0318 13:31:40.289328 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-bundle" (OuterVolumeSpecName: "bundle") pod "3fb697e8-6f9d-4da2-a8dd-e16382be11df" (UID: "3fb697e8-6f9d-4da2-a8dd-e16382be11df"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:31:40.292203 master-0 kubenswrapper[28504]: I0318 13:31:40.292161 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb697e8-6f9d-4da2-a8dd-e16382be11df-kube-api-access-kzcbv" (OuterVolumeSpecName: "kube-api-access-kzcbv") pod "3fb697e8-6f9d-4da2-a8dd-e16382be11df" (UID: "3fb697e8-6f9d-4da2-a8dd-e16382be11df"). InnerVolumeSpecName "kube-api-access-kzcbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:31:40.300722 master-0 kubenswrapper[28504]: I0318 13:31:40.300602 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-util" (OuterVolumeSpecName: "util") pod "3fb697e8-6f9d-4da2-a8dd-e16382be11df" (UID: "3fb697e8-6f9d-4da2-a8dd-e16382be11df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:31:40.389051 master-0 kubenswrapper[28504]: I0318 13:31:40.388197 28504 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:40.389051 master-0 kubenswrapper[28504]: I0318 13:31:40.388242 28504 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fb697e8-6f9d-4da2-a8dd-e16382be11df-util\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:40.389051 master-0 kubenswrapper[28504]: I0318 13:31:40.388255 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzcbv\" (UniqueName: \"kubernetes.io/projected/3fb697e8-6f9d-4da2-a8dd-e16382be11df-kube-api-access-kzcbv\") on node \"master-0\" DevicePath \"\"" Mar 18 13:31:40.723965 master-0 kubenswrapper[28504]: I0318 13:31:40.721226 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" event={"ID":"3fb697e8-6f9d-4da2-a8dd-e16382be11df","Type":"ContainerDied","Data":"5b76e16fb9c966321e4a601f8509faa43f185c3f8cd133636aa10fdef78add38"} Mar 18 13:31:40.723965 master-0 kubenswrapper[28504]: I0318 13:31:40.721288 28504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b76e16fb9c966321e4a601f8509faa43f185c3f8cd133636aa10fdef78add38" Mar 18 13:31:40.723965 master-0 kubenswrapper[28504]: I0318 13:31:40.721385 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726wbzq9" Mar 18 13:31:42.743003 master-0 kubenswrapper[28504]: I0318 13:31:42.742924 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" event={"ID":"58479c1b-d11f-4cfa-940d-5020366c3f41","Type":"ContainerStarted","Data":"9d492ae9a9e9227711ab187a50550a9968b35b870a793454023adf8601705579"} Mar 18 13:31:42.786961 master-0 kubenswrapper[28504]: I0318 13:31:42.786398 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-q9l29" podStartSLOduration=2.3640283 podStartE2EDuration="6.786371224s" podCreationTimestamp="2026-03-18 13:31:36 +0000 UTC" firstStartedPulling="2026-03-18 13:31:37.58869204 +0000 UTC m=+475.083497815" lastFinishedPulling="2026-03-18 13:31:42.011034964 +0000 UTC m=+479.505840739" observedRunningTime="2026-03-18 13:31:42.770932088 +0000 UTC m=+480.265737873" watchObservedRunningTime="2026-03-18 13:31:42.786371224 +0000 UTC m=+480.281176999" Mar 18 13:31:45.631671 master-0 kubenswrapper[28504]: I0318 13:31:45.631597 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-lfgvr"] Mar 18 13:31:45.632469 master-0 
kubenswrapper[28504]: E0318 13:31:45.631953 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="util" Mar 18 13:31:45.632469 master-0 kubenswrapper[28504]: I0318 13:31:45.631967 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="util" Mar 18 13:31:45.632469 master-0 kubenswrapper[28504]: E0318 13:31:45.631986 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="extract" Mar 18 13:31:45.632469 master-0 kubenswrapper[28504]: I0318 13:31:45.631993 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="extract" Mar 18 13:31:45.632469 master-0 kubenswrapper[28504]: E0318 13:31:45.632009 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="pull" Mar 18 13:31:45.632469 master-0 kubenswrapper[28504]: I0318 13:31:45.632015 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="pull" Mar 18 13:31:45.632469 master-0 kubenswrapper[28504]: I0318 13:31:45.632149 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb697e8-6f9d-4da2-a8dd-e16382be11df" containerName="extract" Mar 18 13:31:45.632793 master-0 kubenswrapper[28504]: I0318 13:31:45.632764 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.636760 master-0 kubenswrapper[28504]: I0318 13:31:45.636695 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 18 13:31:45.640636 master-0 kubenswrapper[28504]: I0318 13:31:45.637359 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 18 13:31:45.649241 master-0 kubenswrapper[28504]: I0318 13:31:45.649182 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtt7c\" (UniqueName: \"kubernetes.io/projected/8df495a4-1112-45d4-8e9d-fc8b9395c7b6-kube-api-access-jtt7c\") pod \"cert-manager-webhook-6888856db4-lfgvr\" (UID: \"8df495a4-1112-45d4-8e9d-fc8b9395c7b6\") " pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.649378 master-0 kubenswrapper[28504]: I0318 13:31:45.649297 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8df495a4-1112-45d4-8e9d-fc8b9395c7b6-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-lfgvr\" (UID: \"8df495a4-1112-45d4-8e9d-fc8b9395c7b6\") " pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.689024 master-0 kubenswrapper[28504]: I0318 13:31:45.688903 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-lfgvr"] Mar 18 13:31:45.758074 master-0 kubenswrapper[28504]: I0318 13:31:45.752265 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8df495a4-1112-45d4-8e9d-fc8b9395c7b6-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-lfgvr\" (UID: \"8df495a4-1112-45d4-8e9d-fc8b9395c7b6\") " pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.758074 master-0 
kubenswrapper[28504]: I0318 13:31:45.752441 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtt7c\" (UniqueName: \"kubernetes.io/projected/8df495a4-1112-45d4-8e9d-fc8b9395c7b6-kube-api-access-jtt7c\") pod \"cert-manager-webhook-6888856db4-lfgvr\" (UID: \"8df495a4-1112-45d4-8e9d-fc8b9395c7b6\") " pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.781739 master-0 kubenswrapper[28504]: I0318 13:31:45.781693 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8df495a4-1112-45d4-8e9d-fc8b9395c7b6-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-lfgvr\" (UID: \"8df495a4-1112-45d4-8e9d-fc8b9395c7b6\") " pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.795434 master-0 kubenswrapper[28504]: I0318 13:31:45.795383 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtt7c\" (UniqueName: \"kubernetes.io/projected/8df495a4-1112-45d4-8e9d-fc8b9395c7b6-kube-api-access-jtt7c\") pod \"cert-manager-webhook-6888856db4-lfgvr\" (UID: \"8df495a4-1112-45d4-8e9d-fc8b9395c7b6\") " pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:45.988084 master-0 kubenswrapper[28504]: I0318 13:31:45.988001 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:46.509010 master-0 kubenswrapper[28504]: W0318 13:31:46.508957 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8df495a4_1112_45d4_8e9d_fc8b9395c7b6.slice/crio-d08fbcbf0b12bb683fcb1ea2c01a904491c9e401aca9eb54a41eb2686c6bc562 WatchSource:0}: Error finding container d08fbcbf0b12bb683fcb1ea2c01a904491c9e401aca9eb54a41eb2686c6bc562: Status 404 returned error can't find the container with id d08fbcbf0b12bb683fcb1ea2c01a904491c9e401aca9eb54a41eb2686c6bc562 Mar 18 13:31:46.509359 master-0 kubenswrapper[28504]: I0318 13:31:46.509310 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-lfgvr"] Mar 18 13:31:46.880918 master-0 kubenswrapper[28504]: I0318 13:31:46.880773 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" event={"ID":"8df495a4-1112-45d4-8e9d-fc8b9395c7b6","Type":"ContainerStarted","Data":"d08fbcbf0b12bb683fcb1ea2c01a904491c9e401aca9eb54a41eb2686c6bc562"} Mar 18 13:31:49.399680 master-0 kubenswrapper[28504]: I0318 13:31:49.399612 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sb4gj"] Mar 18 13:31:49.401200 master-0 kubenswrapper[28504]: I0318 13:31:49.401171 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.421010 master-0 kubenswrapper[28504]: I0318 13:31:49.419464 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sb4gj"] Mar 18 13:31:49.520082 master-0 kubenswrapper[28504]: I0318 13:31:49.517861 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd"] Mar 18 13:31:49.520082 master-0 kubenswrapper[28504]: I0318 13:31:49.519810 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" Mar 18 13:31:49.523964 master-0 kubenswrapper[28504]: I0318 13:31:49.523535 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 18 13:31:49.523964 master-0 kubenswrapper[28504]: I0318 13:31:49.523895 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 18 13:31:49.544501 master-0 kubenswrapper[28504]: I0318 13:31:49.544419 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ec14c15-e45e-4eb1-b495-31807f7a691e-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sb4gj\" (UID: \"3ec14c15-e45e-4eb1-b495-31807f7a691e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.544701 master-0 kubenswrapper[28504]: I0318 13:31:49.544515 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcg2n\" (UniqueName: \"kubernetes.io/projected/3ec14c15-e45e-4eb1-b495-31807f7a691e-kube-api-access-wcg2n\") pod \"cert-manager-cainjector-5545bd876-sb4gj\" (UID: \"3ec14c15-e45e-4eb1-b495-31807f7a691e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.585975 master-0 
kubenswrapper[28504]: I0318 13:31:49.578903 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd"] Mar 18 13:31:49.647855 master-0 kubenswrapper[28504]: I0318 13:31:49.647780 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqxw\" (UniqueName: \"kubernetes.io/projected/9b690cda-08f0-4606-a18f-a1be217b5037-kube-api-access-7fqxw\") pod \"nmstate-operator-796d4cfff4-kx4rd\" (UID: \"9b690cda-08f0-4606-a18f-a1be217b5037\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" Mar 18 13:31:49.647855 master-0 kubenswrapper[28504]: I0318 13:31:49.647846 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ec14c15-e45e-4eb1-b495-31807f7a691e-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sb4gj\" (UID: \"3ec14c15-e45e-4eb1-b495-31807f7a691e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.648131 master-0 kubenswrapper[28504]: I0318 13:31:49.647895 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcg2n\" (UniqueName: \"kubernetes.io/projected/3ec14c15-e45e-4eb1-b495-31807f7a691e-kube-api-access-wcg2n\") pod \"cert-manager-cainjector-5545bd876-sb4gj\" (UID: \"3ec14c15-e45e-4eb1-b495-31807f7a691e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.704188 master-0 kubenswrapper[28504]: I0318 13:31:49.704131 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ec14c15-e45e-4eb1-b495-31807f7a691e-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sb4gj\" (UID: \"3ec14c15-e45e-4eb1-b495-31807f7a691e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.710804 master-0 kubenswrapper[28504]: I0318 13:31:49.705965 28504 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcg2n\" (UniqueName: \"kubernetes.io/projected/3ec14c15-e45e-4eb1-b495-31807f7a691e-kube-api-access-wcg2n\") pod \"cert-manager-cainjector-5545bd876-sb4gj\" (UID: \"3ec14c15-e45e-4eb1-b495-31807f7a691e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.745047 master-0 kubenswrapper[28504]: I0318 13:31:49.744435 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" Mar 18 13:31:49.750050 master-0 kubenswrapper[28504]: I0318 13:31:49.749781 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqxw\" (UniqueName: \"kubernetes.io/projected/9b690cda-08f0-4606-a18f-a1be217b5037-kube-api-access-7fqxw\") pod \"nmstate-operator-796d4cfff4-kx4rd\" (UID: \"9b690cda-08f0-4606-a18f-a1be217b5037\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" Mar 18 13:31:49.767475 master-0 kubenswrapper[28504]: I0318 13:31:49.767402 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqxw\" (UniqueName: \"kubernetes.io/projected/9b690cda-08f0-4606-a18f-a1be217b5037-kube-api-access-7fqxw\") pod \"nmstate-operator-796d4cfff4-kx4rd\" (UID: \"9b690cda-08f0-4606-a18f-a1be217b5037\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" Mar 18 13:31:49.891888 master-0 kubenswrapper[28504]: I0318 13:31:49.887284 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" Mar 18 13:31:50.203691 master-0 kubenswrapper[28504]: I0318 13:31:50.200988 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sb4gj"] Mar 18 13:31:50.369786 master-0 kubenswrapper[28504]: I0318 13:31:50.368413 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd"] Mar 18 13:31:50.375908 master-0 kubenswrapper[28504]: W0318 13:31:50.375836 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b690cda_08f0_4606_a18f_a1be217b5037.slice/crio-ce9c053b1ad662a5e5a77f9987847dc490a6c116045133a3a232e58cffbaf7a2 WatchSource:0}: Error finding container ce9c053b1ad662a5e5a77f9987847dc490a6c116045133a3a232e58cffbaf7a2: Status 404 returned error can't find the container with id ce9c053b1ad662a5e5a77f9987847dc490a6c116045133a3a232e58cffbaf7a2 Mar 18 13:31:50.913108 master-0 kubenswrapper[28504]: I0318 13:31:50.913030 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" event={"ID":"9b690cda-08f0-4606-a18f-a1be217b5037","Type":"ContainerStarted","Data":"ce9c053b1ad662a5e5a77f9987847dc490a6c116045133a3a232e58cffbaf7a2"} Mar 18 13:31:50.915292 master-0 kubenswrapper[28504]: I0318 13:31:50.915246 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" event={"ID":"3ec14c15-e45e-4eb1-b495-31807f7a691e","Type":"ContainerStarted","Data":"eeb53ed82f3bb75528c292738a07c93fc552ac469271860782bc994b8bb2989b"} Mar 18 13:31:53.956079 master-0 kubenswrapper[28504]: I0318 13:31:53.955988 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" 
event={"ID":"8df495a4-1112-45d4-8e9d-fc8b9395c7b6","Type":"ContainerStarted","Data":"a863b1714d876d51f1e01bf12cf663bccebd71c2fe88ed481e5c48b8f418db76"} Mar 18 13:31:53.956689 master-0 kubenswrapper[28504]: I0318 13:31:53.956265 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:31:53.958477 master-0 kubenswrapper[28504]: I0318 13:31:53.958432 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" event={"ID":"3ec14c15-e45e-4eb1-b495-31807f7a691e","Type":"ContainerStarted","Data":"7ea83093c999bf48dee69020e71e4666cb7d7cf06e28385a9514ecc8e5b70140"} Mar 18 13:31:53.993507 master-0 kubenswrapper[28504]: I0318 13:31:53.993426 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" podStartSLOduration=2.507108016 podStartE2EDuration="8.993406943s" podCreationTimestamp="2026-03-18 13:31:45 +0000 UTC" firstStartedPulling="2026-03-18 13:31:46.511385163 +0000 UTC m=+484.006190938" lastFinishedPulling="2026-03-18 13:31:52.99768409 +0000 UTC m=+490.492489865" observedRunningTime="2026-03-18 13:31:53.989016809 +0000 UTC m=+491.483822604" watchObservedRunningTime="2026-03-18 13:31:53.993406943 +0000 UTC m=+491.488212738" Mar 18 13:31:54.064676 master-0 kubenswrapper[28504]: I0318 13:31:54.064263 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-sb4gj" podStartSLOduration=2.281449128 podStartE2EDuration="5.064239546s" podCreationTimestamp="2026-03-18 13:31:49 +0000 UTC" firstStartedPulling="2026-03-18 13:31:50.222864968 +0000 UTC m=+487.717670763" lastFinishedPulling="2026-03-18 13:31:53.005655406 +0000 UTC m=+490.500461181" observedRunningTime="2026-03-18 13:31:54.060912582 +0000 UTC m=+491.555718357" watchObservedRunningTime="2026-03-18 13:31:54.064239546 +0000 UTC m=+491.559045321" Mar 
18 13:31:55.390921 master-0 kubenswrapper[28504]: I0318 13:31:55.389335 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-hsjwr"] Mar 18 13:31:55.396274 master-0 kubenswrapper[28504]: I0318 13:31:55.396216 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.404865 master-0 kubenswrapper[28504]: I0318 13:31:55.404808 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clshm\" (UniqueName: \"kubernetes.io/projected/b0cb1744-6db9-401b-8d24-a9187582cdf8-kube-api-access-clshm\") pod \"cert-manager-545d4d4674-hsjwr\" (UID: \"b0cb1744-6db9-401b-8d24-a9187582cdf8\") " pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.405262 master-0 kubenswrapper[28504]: I0318 13:31:55.404887 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0cb1744-6db9-401b-8d24-a9187582cdf8-bound-sa-token\") pod \"cert-manager-545d4d4674-hsjwr\" (UID: \"b0cb1744-6db9-401b-8d24-a9187582cdf8\") " pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.421847 master-0 kubenswrapper[28504]: I0318 13:31:55.421367 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-hsjwr"] Mar 18 13:31:55.514965 master-0 kubenswrapper[28504]: I0318 13:31:55.506008 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clshm\" (UniqueName: \"kubernetes.io/projected/b0cb1744-6db9-401b-8d24-a9187582cdf8-kube-api-access-clshm\") pod \"cert-manager-545d4d4674-hsjwr\" (UID: \"b0cb1744-6db9-401b-8d24-a9187582cdf8\") " pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.514965 master-0 kubenswrapper[28504]: I0318 13:31:55.506083 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0cb1744-6db9-401b-8d24-a9187582cdf8-bound-sa-token\") pod \"cert-manager-545d4d4674-hsjwr\" (UID: \"b0cb1744-6db9-401b-8d24-a9187582cdf8\") " pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.550969 master-0 kubenswrapper[28504]: I0318 13:31:55.544612 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0cb1744-6db9-401b-8d24-a9187582cdf8-bound-sa-token\") pod \"cert-manager-545d4d4674-hsjwr\" (UID: \"b0cb1744-6db9-401b-8d24-a9187582cdf8\") " pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.550969 master-0 kubenswrapper[28504]: I0318 13:31:55.546325 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clshm\" (UniqueName: \"kubernetes.io/projected/b0cb1744-6db9-401b-8d24-a9187582cdf8-kube-api-access-clshm\") pod \"cert-manager-545d4d4674-hsjwr\" (UID: \"b0cb1744-6db9-401b-8d24-a9187582cdf8\") " pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:55.748964 master-0 kubenswrapper[28504]: I0318 13:31:55.748391 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-hsjwr" Mar 18 13:31:56.240012 master-0 kubenswrapper[28504]: I0318 13:31:56.224835 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-c6675654-f8zcx"] Mar 18 13:31:56.249648 master-0 kubenswrapper[28504]: I0318 13:31:56.249584 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c6675654-f8zcx"] Mar 18 13:31:56.249857 master-0 kubenswrapper[28504]: I0318 13:31:56.249712 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.266710 master-0 kubenswrapper[28504]: I0318 13:31:56.266637 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 18 13:31:56.266963 master-0 kubenswrapper[28504]: I0318 13:31:56.266868 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 18 13:31:56.267195 master-0 kubenswrapper[28504]: I0318 13:31:56.267063 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 18 13:31:56.267195 master-0 kubenswrapper[28504]: I0318 13:31:56.267195 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 18 13:31:56.347882 master-0 kubenswrapper[28504]: I0318 13:31:56.347805 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lz8z\" (UniqueName: \"kubernetes.io/projected/4f488544-10c9-4e31-b183-60eb24cd6593-kube-api-access-4lz8z\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.347882 master-0 kubenswrapper[28504]: I0318 13:31:56.347877 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f488544-10c9-4e31-b183-60eb24cd6593-webhook-cert\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.348289 master-0 kubenswrapper[28504]: I0318 13:31:56.347981 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f488544-10c9-4e31-b183-60eb24cd6593-apiservice-cert\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.457041 master-0 kubenswrapper[28504]: I0318 13:31:56.454323 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lz8z\" (UniqueName: \"kubernetes.io/projected/4f488544-10c9-4e31-b183-60eb24cd6593-kube-api-access-4lz8z\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.457041 master-0 kubenswrapper[28504]: I0318 13:31:56.454466 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f488544-10c9-4e31-b183-60eb24cd6593-webhook-cert\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.457041 master-0 kubenswrapper[28504]: I0318 13:31:56.454559 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f488544-10c9-4e31-b183-60eb24cd6593-apiservice-cert\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.463196 master-0 kubenswrapper[28504]: I0318 13:31:56.459948 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f488544-10c9-4e31-b183-60eb24cd6593-apiservice-cert\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: 
\"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.463196 master-0 kubenswrapper[28504]: I0318 13:31:56.460101 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f488544-10c9-4e31-b183-60eb24cd6593-webhook-cert\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.496213 master-0 kubenswrapper[28504]: I0318 13:31:56.496104 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lz8z\" (UniqueName: \"kubernetes.io/projected/4f488544-10c9-4e31-b183-60eb24cd6593-kube-api-access-4lz8z\") pod \"metallb-operator-controller-manager-c6675654-f8zcx\" (UID: \"4f488544-10c9-4e31-b183-60eb24cd6593\") " pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.605967 master-0 kubenswrapper[28504]: I0318 13:31:56.605583 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-hsjwr"] Mar 18 13:31:56.654390 master-0 kubenswrapper[28504]: I0318 13:31:56.646760 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj"] Mar 18 13:31:56.654390 master-0 kubenswrapper[28504]: I0318 13:31:56.646791 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:31:56.654390 master-0 kubenswrapper[28504]: I0318 13:31:56.648246 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.654390 master-0 kubenswrapper[28504]: I0318 13:31:56.653637 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 13:31:56.700146 master-0 kubenswrapper[28504]: I0318 13:31:56.656925 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 18 13:31:56.700146 master-0 kubenswrapper[28504]: I0318 13:31:56.663332 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj"] Mar 18 13:31:56.766624 master-0 kubenswrapper[28504]: I0318 13:31:56.766292 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275v8\" (UniqueName: \"kubernetes.io/projected/133f045a-3c88-4373-84e8-55217f947865-kube-api-access-275v8\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.766624 master-0 kubenswrapper[28504]: I0318 13:31:56.766402 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/133f045a-3c88-4373-84e8-55217f947865-apiservice-cert\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.766624 master-0 kubenswrapper[28504]: I0318 13:31:56.766489 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/133f045a-3c88-4373-84e8-55217f947865-webhook-cert\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: 
\"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.868296 master-0 kubenswrapper[28504]: I0318 13:31:56.868230 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275v8\" (UniqueName: \"kubernetes.io/projected/133f045a-3c88-4373-84e8-55217f947865-kube-api-access-275v8\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.869121 master-0 kubenswrapper[28504]: I0318 13:31:56.869090 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/133f045a-3c88-4373-84e8-55217f947865-apiservice-cert\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.869217 master-0 kubenswrapper[28504]: I0318 13:31:56.869196 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/133f045a-3c88-4373-84e8-55217f947865-webhook-cert\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.873987 master-0 kubenswrapper[28504]: I0318 13:31:56.873301 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/133f045a-3c88-4373-84e8-55217f947865-webhook-cert\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.883469 master-0 kubenswrapper[28504]: I0318 13:31:56.882875 28504 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/133f045a-3c88-4373-84e8-55217f947865-apiservice-cert\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:56.906853 master-0 kubenswrapper[28504]: I0318 13:31:56.906796 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275v8\" (UniqueName: \"kubernetes.io/projected/133f045a-3c88-4373-84e8-55217f947865-kube-api-access-275v8\") pod \"metallb-operator-webhook-server-67655b5bb9-s6lrj\" (UID: \"133f045a-3c88-4373-84e8-55217f947865\") " pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:57.044758 master-0 kubenswrapper[28504]: I0318 13:31:57.044637 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-hsjwr" event={"ID":"b0cb1744-6db9-401b-8d24-a9187582cdf8","Type":"ContainerStarted","Data":"f2f3c914cbac935c83758352b8dcdc802a8087142b0d256de7e0730b15c36b8c"} Mar 18 13:31:57.096807 master-0 kubenswrapper[28504]: I0318 13:31:57.096732 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:31:58.978276 master-0 kubenswrapper[28504]: I0318 13:31:58.976703 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c6675654-f8zcx"] Mar 18 13:31:58.995349 master-0 kubenswrapper[28504]: W0318 13:31:58.991416 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f488544_10c9_4e31_b183_60eb24cd6593.slice/crio-0e27f968d106437832149492f2b5453e0b9f39e3f21f5d1c7065ba7ddb9bd7be WatchSource:0}: Error finding container 0e27f968d106437832149492f2b5453e0b9f39e3f21f5d1c7065ba7ddb9bd7be: Status 404 returned error can't find the container with id 0e27f968d106437832149492f2b5453e0b9f39e3f21f5d1c7065ba7ddb9bd7be Mar 18 13:31:58.995349 master-0 kubenswrapper[28504]: I0318 13:31:58.994291 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj"] Mar 18 13:31:59.008552 master-0 kubenswrapper[28504]: W0318 13:31:59.007647 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod133f045a_3c88_4373_84e8_55217f947865.slice/crio-29bf3dfd9d646bdb7fd3e4c7744af9c3dd12779033916bc305e6dba6bb2d1afe WatchSource:0}: Error finding container 29bf3dfd9d646bdb7fd3e4c7744af9c3dd12779033916bc305e6dba6bb2d1afe: Status 404 returned error can't find the container with id 29bf3dfd9d646bdb7fd3e4c7744af9c3dd12779033916bc305e6dba6bb2d1afe Mar 18 13:31:59.067239 master-0 kubenswrapper[28504]: I0318 13:31:59.067168 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-hsjwr" event={"ID":"b0cb1744-6db9-401b-8d24-a9187582cdf8","Type":"ContainerStarted","Data":"ff8b58769b24f7f19280d923a3c0f8724cc9b6f50e54c0676bbc8f124d2b6ccc"} Mar 18 13:31:59.078918 master-0 kubenswrapper[28504]: 
I0318 13:31:59.076950 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" event={"ID":"9b690cda-08f0-4606-a18f-a1be217b5037","Type":"ContainerStarted","Data":"a00a610f2bcb03daec791f721dc793a997e105a65e645dbb24f4351c3330c442"} Mar 18 13:31:59.086429 master-0 kubenswrapper[28504]: I0318 13:31:59.086327 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" event={"ID":"4f488544-10c9-4e31-b183-60eb24cd6593","Type":"ContainerStarted","Data":"0e27f968d106437832149492f2b5453e0b9f39e3f21f5d1c7065ba7ddb9bd7be"} Mar 18 13:31:59.093505 master-0 kubenswrapper[28504]: I0318 13:31:59.093324 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" event={"ID":"133f045a-3c88-4373-84e8-55217f947865","Type":"ContainerStarted","Data":"29bf3dfd9d646bdb7fd3e4c7744af9c3dd12779033916bc305e6dba6bb2d1afe"} Mar 18 13:31:59.116202 master-0 kubenswrapper[28504]: I0318 13:31:59.116116 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-hsjwr" podStartSLOduration=4.116089678 podStartE2EDuration="4.116089678s" podCreationTimestamp="2026-03-18 13:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:31:59.102301318 +0000 UTC m=+496.597107103" watchObservedRunningTime="2026-03-18 13:31:59.116089678 +0000 UTC m=+496.610895453" Mar 18 13:31:59.259863 master-0 kubenswrapper[28504]: I0318 13:31:59.259722 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-kx4rd" podStartSLOduration=1.906234859 podStartE2EDuration="10.259701248s" podCreationTimestamp="2026-03-18 13:31:49 +0000 UTC" firstStartedPulling="2026-03-18 13:31:50.378640972 +0000 UTC m=+487.873446747" 
lastFinishedPulling="2026-03-18 13:31:58.732107351 +0000 UTC m=+496.226913136" observedRunningTime="2026-03-18 13:31:59.202773469 +0000 UTC m=+496.697579254" watchObservedRunningTime="2026-03-18 13:31:59.259701248 +0000 UTC m=+496.754507023" Mar 18 13:32:00.990903 master-0 kubenswrapper[28504]: I0318 13:32:00.990839 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-lfgvr" Mar 18 13:32:06.454678 master-0 kubenswrapper[28504]: I0318 13:32:06.454613 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-4chrl"] Mar 18 13:32:06.457159 master-0 kubenswrapper[28504]: I0318 13:32:06.455901 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" Mar 18 13:32:06.460059 master-0 kubenswrapper[28504]: I0318 13:32:06.459966 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 18 13:32:06.461591 master-0 kubenswrapper[28504]: I0318 13:32:06.461542 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 18 13:32:06.483306 master-0 kubenswrapper[28504]: I0318 13:32:06.483258 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-4chrl"] Mar 18 13:32:06.566976 master-0 kubenswrapper[28504]: I0318 13:32:06.566909 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdbt\" (UniqueName: \"kubernetes.io/projected/1ef6fd38-0021-4460-be7d-eb73d64f4d71-kube-api-access-5pdbt\") pod \"obo-prometheus-operator-8ff7d675-4chrl\" (UID: \"1ef6fd38-0021-4460-be7d-eb73d64f4d71\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" Mar 18 13:32:06.674631 master-0 kubenswrapper[28504]: I0318 13:32:06.674019 28504 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pdbt\" (UniqueName: \"kubernetes.io/projected/1ef6fd38-0021-4460-be7d-eb73d64f4d71-kube-api-access-5pdbt\") pod \"obo-prometheus-operator-8ff7d675-4chrl\" (UID: \"1ef6fd38-0021-4460-be7d-eb73d64f4d71\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" Mar 18 13:32:06.725021 master-0 kubenswrapper[28504]: I0318 13:32:06.712670 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pdbt\" (UniqueName: \"kubernetes.io/projected/1ef6fd38-0021-4460-be7d-eb73d64f4d71-kube-api-access-5pdbt\") pod \"obo-prometheus-operator-8ff7d675-4chrl\" (UID: \"1ef6fd38-0021-4460-be7d-eb73d64f4d71\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" Mar 18 13:32:06.799012 master-0 kubenswrapper[28504]: I0318 13:32:06.798924 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" Mar 18 13:32:07.077746 master-0 kubenswrapper[28504]: I0318 13:32:07.077617 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx"] Mar 18 13:32:07.082988 master-0 kubenswrapper[28504]: I0318 13:32:07.082809 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 13:32:07.087984 master-0 kubenswrapper[28504]: I0318 13:32:07.087928 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 18 13:32:07.097549 master-0 kubenswrapper[28504]: I0318 13:32:07.097430 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx"] Mar 18 13:32:07.105796 master-0 kubenswrapper[28504]: I0318 13:32:07.105746 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp"] Mar 18 13:32:07.107109 master-0 kubenswrapper[28504]: I0318 13:32:07.107082 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.192457 master-0 kubenswrapper[28504]: I0318 13:32:07.191784 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ae6119a-3e75-4646-8461-44837271a5c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-74qzp\" (UID: \"5ae6119a-3e75-4646-8461-44837271a5c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.192457 master-0 kubenswrapper[28504]: I0318 13:32:07.191875 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3b5d185-c320-460f-8a39-0996af3acc72-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx\" (UID: \"c3b5d185-c320-460f-8a39-0996af3acc72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 
13:32:07.192457 master-0 kubenswrapper[28504]: I0318 13:32:07.191906 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3b5d185-c320-460f-8a39-0996af3acc72-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx\" (UID: \"c3b5d185-c320-460f-8a39-0996af3acc72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 13:32:07.192457 master-0 kubenswrapper[28504]: I0318 13:32:07.191950 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ae6119a-3e75-4646-8461-44837271a5c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-74qzp\" (UID: \"5ae6119a-3e75-4646-8461-44837271a5c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.221101 master-0 kubenswrapper[28504]: I0318 13:32:07.215608 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp"] Mar 18 13:32:07.301978 master-0 kubenswrapper[28504]: I0318 13:32:07.299887 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ae6119a-3e75-4646-8461-44837271a5c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-74qzp\" (UID: \"5ae6119a-3e75-4646-8461-44837271a5c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.301978 master-0 kubenswrapper[28504]: I0318 13:32:07.299973 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3b5d185-c320-460f-8a39-0996af3acc72-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx\" (UID: 
\"c3b5d185-c320-460f-8a39-0996af3acc72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 13:32:07.301978 master-0 kubenswrapper[28504]: I0318 13:32:07.300008 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3b5d185-c320-460f-8a39-0996af3acc72-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx\" (UID: \"c3b5d185-c320-460f-8a39-0996af3acc72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 13:32:07.301978 master-0 kubenswrapper[28504]: I0318 13:32:07.300048 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ae6119a-3e75-4646-8461-44837271a5c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-74qzp\" (UID: \"5ae6119a-3e75-4646-8461-44837271a5c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.307451 master-0 kubenswrapper[28504]: I0318 13:32:07.306548 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ae6119a-3e75-4646-8461-44837271a5c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-74qzp\" (UID: \"5ae6119a-3e75-4646-8461-44837271a5c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.307752 master-0 kubenswrapper[28504]: I0318 13:32:07.307685 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3b5d185-c320-460f-8a39-0996af3acc72-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx\" (UID: \"c3b5d185-c320-460f-8a39-0996af3acc72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 
13:32:07.307990 master-0 kubenswrapper[28504]: I0318 13:32:07.307926 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3b5d185-c320-460f-8a39-0996af3acc72-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx\" (UID: \"c3b5d185-c320-460f-8a39-0996af3acc72\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 13:32:07.310549 master-0 kubenswrapper[28504]: I0318 13:32:07.310499 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ae6119a-3e75-4646-8461-44837271a5c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67946df4bf-74qzp\" (UID: \"5ae6119a-3e75-4646-8461-44837271a5c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.498008 master-0 kubenswrapper[28504]: I0318 13:32:07.494508 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" Mar 18 13:32:07.515504 master-0 kubenswrapper[28504]: I0318 13:32:07.512433 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" Mar 18 13:32:07.515504 master-0 kubenswrapper[28504]: I0318 13:32:07.512582 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-rxf85"] Mar 18 13:32:07.515504 master-0 kubenswrapper[28504]: I0318 13:32:07.514047 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.518123 master-0 kubenswrapper[28504]: I0318 13:32:07.518091 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 18 13:32:07.542930 master-0 kubenswrapper[28504]: I0318 13:32:07.542692 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-rxf85"] Mar 18 13:32:07.706053 master-0 kubenswrapper[28504]: I0318 13:32:07.705985 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a411a13-62ee-4723-995c-48b9ddd11c48-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-rxf85\" (UID: \"8a411a13-62ee-4723-995c-48b9ddd11c48\") " pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.706338 master-0 kubenswrapper[28504]: I0318 13:32:07.706096 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknq5\" (UniqueName: \"kubernetes.io/projected/8a411a13-62ee-4723-995c-48b9ddd11c48-kube-api-access-hknq5\") pod \"observability-operator-6dd7dd855f-rxf85\" (UID: \"8a411a13-62ee-4723-995c-48b9ddd11c48\") " pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.808900 master-0 kubenswrapper[28504]: I0318 13:32:07.807631 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknq5\" (UniqueName: \"kubernetes.io/projected/8a411a13-62ee-4723-995c-48b9ddd11c48-kube-api-access-hknq5\") pod \"observability-operator-6dd7dd855f-rxf85\" (UID: \"8a411a13-62ee-4723-995c-48b9ddd11c48\") " pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.808900 master-0 kubenswrapper[28504]: I0318 13:32:07.807789 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a411a13-62ee-4723-995c-48b9ddd11c48-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-rxf85\" (UID: \"8a411a13-62ee-4723-995c-48b9ddd11c48\") " pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.815023 master-0 kubenswrapper[28504]: I0318 13:32:07.813355 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a411a13-62ee-4723-995c-48b9ddd11c48-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-rxf85\" (UID: \"8a411a13-62ee-4723-995c-48b9ddd11c48\") " pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.833912 master-0 kubenswrapper[28504]: I0318 13:32:07.833797 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknq5\" (UniqueName: \"kubernetes.io/projected/8a411a13-62ee-4723-995c-48b9ddd11c48-kube-api-access-hknq5\") pod \"observability-operator-6dd7dd855f-rxf85\" (UID: \"8a411a13-62ee-4723-995c-48b9ddd11c48\") " pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.880960 master-0 kubenswrapper[28504]: I0318 13:32:07.880561 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:07.988443 master-0 kubenswrapper[28504]: I0318 13:32:07.985753 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-6f5bc999fb-bzb9c"] Mar 18 13:32:07.988443 master-0 kubenswrapper[28504]: I0318 13:32:07.987087 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:07.992972 master-0 kubenswrapper[28504]: I0318 13:32:07.992884 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 18 13:32:08.049907 master-0 kubenswrapper[28504]: I0318 13:32:08.048990 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f30bc0fa-c59c-4581-9176-6777591e1a33-openshift-service-ca\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.049907 master-0 kubenswrapper[28504]: I0318 13:32:08.049202 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f30bc0fa-c59c-4581-9176-6777591e1a33-apiservice-cert\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.055043 master-0 kubenswrapper[28504]: I0318 13:32:08.053065 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dwjg\" (UniqueName: \"kubernetes.io/projected/f30bc0fa-c59c-4581-9176-6777591e1a33-kube-api-access-7dwjg\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.055043 master-0 kubenswrapper[28504]: I0318 13:32:08.053149 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f30bc0fa-c59c-4581-9176-6777591e1a33-webhook-cert\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " 
pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.071507 master-0 kubenswrapper[28504]: I0318 13:32:08.071297 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6f5bc999fb-bzb9c"] Mar 18 13:32:08.160093 master-0 kubenswrapper[28504]: I0318 13:32:08.155352 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f30bc0fa-c59c-4581-9176-6777591e1a33-apiservice-cert\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.160093 master-0 kubenswrapper[28504]: I0318 13:32:08.155513 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dwjg\" (UniqueName: \"kubernetes.io/projected/f30bc0fa-c59c-4581-9176-6777591e1a33-kube-api-access-7dwjg\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.160093 master-0 kubenswrapper[28504]: I0318 13:32:08.155568 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f30bc0fa-c59c-4581-9176-6777591e1a33-webhook-cert\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.160093 master-0 kubenswrapper[28504]: I0318 13:32:08.155630 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f30bc0fa-c59c-4581-9176-6777591e1a33-openshift-service-ca\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.160093 master-0 
kubenswrapper[28504]: I0318 13:32:08.157115 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f30bc0fa-c59c-4581-9176-6777591e1a33-openshift-service-ca\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.160559 master-0 kubenswrapper[28504]: I0318 13:32:08.160496 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f30bc0fa-c59c-4581-9176-6777591e1a33-apiservice-cert\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.164632 master-0 kubenswrapper[28504]: I0318 13:32:08.164573 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f30bc0fa-c59c-4581-9176-6777591e1a33-webhook-cert\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.179242 master-0 kubenswrapper[28504]: I0318 13:32:08.178566 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dwjg\" (UniqueName: \"kubernetes.io/projected/f30bc0fa-c59c-4581-9176-6777591e1a33-kube-api-access-7dwjg\") pod \"perses-operator-6f5bc999fb-bzb9c\" (UID: \"f30bc0fa-c59c-4581-9176-6777591e1a33\") " pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:08.377268 master-0 kubenswrapper[28504]: I0318 13:32:08.377131 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:13.715432 master-0 kubenswrapper[28504]: I0318 13:32:13.713202 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6f5bc999fb-bzb9c"] Mar 18 13:32:13.852359 master-0 kubenswrapper[28504]: I0318 13:32:13.851383 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-4chrl"] Mar 18 13:32:13.872236 master-0 kubenswrapper[28504]: I0318 13:32:13.872145 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp"] Mar 18 13:32:13.898250 master-0 kubenswrapper[28504]: W0318 13:32:13.898051 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef6fd38_0021_4460_be7d_eb73d64f4d71.slice/crio-37f9fb02f61988c26cfbdd91a69854e3aefad2dab0f1a01d5f80e2d7c3a995ae WatchSource:0}: Error finding container 37f9fb02f61988c26cfbdd91a69854e3aefad2dab0f1a01d5f80e2d7c3a995ae: Status 404 returned error can't find the container with id 37f9fb02f61988c26cfbdd91a69854e3aefad2dab0f1a01d5f80e2d7c3a995ae Mar 18 13:32:13.898512 master-0 kubenswrapper[28504]: W0318 13:32:13.898463 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae6119a_3e75_4646_8461_44837271a5c4.slice/crio-6922f2c44c777d8213967368f250a955af17a839b562e43765858d2eb58aeb3d WatchSource:0}: Error finding container 6922f2c44c777d8213967368f250a955af17a839b562e43765858d2eb58aeb3d: Status 404 returned error can't find the container with id 6922f2c44c777d8213967368f250a955af17a839b562e43765858d2eb58aeb3d Mar 18 13:32:13.952328 master-0 kubenswrapper[28504]: I0318 13:32:13.952265 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx"] Mar 18 13:32:14.028989 master-0 kubenswrapper[28504]: I0318 13:32:14.026424 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-rxf85"] Mar 18 13:32:14.046769 master-0 kubenswrapper[28504]: W0318 13:32:14.045123 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a411a13_62ee_4723_995c_48b9ddd11c48.slice/crio-9c1ea4d79fe370205f3ad431c0576fcd4c0de3cc6933c01a9c178e0e1b2025a5 WatchSource:0}: Error finding container 9c1ea4d79fe370205f3ad431c0576fcd4c0de3cc6933c01a9c178e0e1b2025a5: Status 404 returned error can't find the container with id 9c1ea4d79fe370205f3ad431c0576fcd4c0de3cc6933c01a9c178e0e1b2025a5 Mar 18 13:32:14.343149 master-0 kubenswrapper[28504]: I0318 13:32:14.340191 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" event={"ID":"c3b5d185-c320-460f-8a39-0996af3acc72","Type":"ContainerStarted","Data":"1f28f62a8383710dbc9d3cdb2c736c4f52a140ada7961aea628ba8a3aae46832"} Mar 18 13:32:14.343149 master-0 kubenswrapper[28504]: I0318 13:32:14.342350 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" event={"ID":"4f488544-10c9-4e31-b183-60eb24cd6593","Type":"ContainerStarted","Data":"5007acff5435af88ce9aab1dc2999831dedd37724f0b4777cf3b03ad5618fac9"} Mar 18 13:32:14.343149 master-0 kubenswrapper[28504]: I0318 13:32:14.343090 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:32:14.345601 master-0 kubenswrapper[28504]: I0318 13:32:14.345543 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" 
event={"ID":"1ef6fd38-0021-4460-be7d-eb73d64f4d71","Type":"ContainerStarted","Data":"37f9fb02f61988c26cfbdd91a69854e3aefad2dab0f1a01d5f80e2d7c3a995ae"} Mar 18 13:32:14.346842 master-0 kubenswrapper[28504]: I0318 13:32:14.346777 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" event={"ID":"5ae6119a-3e75-4646-8461-44837271a5c4","Type":"ContainerStarted","Data":"6922f2c44c777d8213967368f250a955af17a839b562e43765858d2eb58aeb3d"} Mar 18 13:32:14.353029 master-0 kubenswrapper[28504]: I0318 13:32:14.352685 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" event={"ID":"f30bc0fa-c59c-4581-9176-6777591e1a33","Type":"ContainerStarted","Data":"6b1dd0066e721d9d2609f618bcacd66e888c7bd4a20972a384788ade8ee8b195"} Mar 18 13:32:14.363911 master-0 kubenswrapper[28504]: I0318 13:32:14.359237 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" event={"ID":"133f045a-3c88-4373-84e8-55217f947865","Type":"ContainerStarted","Data":"3e4dd110ed5294c23503a8dcb8ea97397ea97c9fafcb436ec683fc3e58b93045"} Mar 18 13:32:14.363911 master-0 kubenswrapper[28504]: I0318 13:32:14.360278 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:32:14.366009 master-0 kubenswrapper[28504]: I0318 13:32:14.365957 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" event={"ID":"8a411a13-62ee-4723-995c-48b9ddd11c48","Type":"ContainerStarted","Data":"9c1ea4d79fe370205f3ad431c0576fcd4c0de3cc6933c01a9c178e0e1b2025a5"} Mar 18 13:32:14.380011 master-0 kubenswrapper[28504]: I0318 13:32:14.379887 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" podStartSLOduration=4.486228006 podStartE2EDuration="18.379859962s" podCreationTimestamp="2026-03-18 13:31:56 +0000 UTC" firstStartedPulling="2026-03-18 13:31:59.001663393 +0000 UTC m=+496.496469168" lastFinishedPulling="2026-03-18 13:32:12.895295359 +0000 UTC m=+510.390101124" observedRunningTime="2026-03-18 13:32:14.376834716 +0000 UTC m=+511.871640491" watchObservedRunningTime="2026-03-18 13:32:14.379859962 +0000 UTC m=+511.874665737" Mar 18 13:32:14.421520 master-0 kubenswrapper[28504]: I0318 13:32:14.421410 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" podStartSLOduration=4.509999687 podStartE2EDuration="18.421383846s" podCreationTimestamp="2026-03-18 13:31:56 +0000 UTC" firstStartedPulling="2026-03-18 13:31:59.01076914 +0000 UTC m=+496.505574915" lastFinishedPulling="2026-03-18 13:32:12.922153289 +0000 UTC m=+510.416959074" observedRunningTime="2026-03-18 13:32:14.414699227 +0000 UTC m=+511.909505002" watchObservedRunningTime="2026-03-18 13:32:14.421383846 +0000 UTC m=+511.916189641" Mar 18 13:32:27.101923 master-0 kubenswrapper[28504]: I0318 13:32:27.101844 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-67655b5bb9-s6lrj" Mar 18 13:32:32.743662 master-0 kubenswrapper[28504]: I0318 13:32:32.743600 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" event={"ID":"5ae6119a-3e75-4646-8461-44837271a5c4","Type":"ContainerStarted","Data":"d46e231f42f6c1cb415121e30275c622636c6aabcc5ff9ab542b4794ae135bad"} Mar 18 13:32:32.777415 master-0 kubenswrapper[28504]: I0318 13:32:32.777343 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:32.777415 
master-0 kubenswrapper[28504]: I0318 13:32:32.777401 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" event={"ID":"f30bc0fa-c59c-4581-9176-6777591e1a33","Type":"ContainerStarted","Data":"93ed534c4b3c41af0634dafebd9187f9fc7fab8c2040d1b75b367e4fa238dbef"} Mar 18 13:32:32.777415 master-0 kubenswrapper[28504]: I0318 13:32:32.777426 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" event={"ID":"c3b5d185-c320-460f-8a39-0996af3acc72","Type":"ContainerStarted","Data":"2c4af256692c9829ad3845f68cc4e27703c8f798ade02b06c885cb3945293330"} Mar 18 13:32:32.785655 master-0 kubenswrapper[28504]: I0318 13:32:32.784796 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-74qzp" podStartSLOduration=7.532763525 podStartE2EDuration="25.784768237s" podCreationTimestamp="2026-03-18 13:32:07 +0000 UTC" firstStartedPulling="2026-03-18 13:32:13.906078648 +0000 UTC m=+511.400884423" lastFinishedPulling="2026-03-18 13:32:32.15808335 +0000 UTC m=+529.652889135" observedRunningTime="2026-03-18 13:32:32.777961055 +0000 UTC m=+530.272766830" watchObservedRunningTime="2026-03-18 13:32:32.784768237 +0000 UTC m=+530.279574012" Mar 18 13:32:32.899442 master-0 kubenswrapper[28504]: I0318 13:32:32.898279 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" podStartSLOduration=7.556452695 podStartE2EDuration="25.898255036s" podCreationTimestamp="2026-03-18 13:32:07 +0000 UTC" firstStartedPulling="2026-03-18 13:32:13.766884052 +0000 UTC m=+511.261689827" lastFinishedPulling="2026-03-18 13:32:32.108686383 +0000 UTC m=+529.603492168" observedRunningTime="2026-03-18 13:32:32.815413494 +0000 UTC m=+530.310219279" watchObservedRunningTime="2026-03-18 13:32:32.898255036 +0000 UTC 
m=+530.393060811" Mar 18 13:32:32.899863 master-0 kubenswrapper[28504]: I0318 13:32:32.899651 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx" podStartSLOduration=7.690198916 podStartE2EDuration="25.899640815s" podCreationTimestamp="2026-03-18 13:32:07 +0000 UTC" firstStartedPulling="2026-03-18 13:32:13.946766948 +0000 UTC m=+511.441572723" lastFinishedPulling="2026-03-18 13:32:32.156208847 +0000 UTC m=+529.651014622" observedRunningTime="2026-03-18 13:32:32.873488396 +0000 UTC m=+530.368294171" watchObservedRunningTime="2026-03-18 13:32:32.899640815 +0000 UTC m=+530.394446600" Mar 18 13:32:33.786363 master-0 kubenswrapper[28504]: I0318 13:32:33.785839 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" event={"ID":"8a411a13-62ee-4723-995c-48b9ddd11c48","Type":"ContainerStarted","Data":"6a45ff422cd01e48bb177691f8c1bb00e2bd9ee8682cf893b16503aa193fd807"} Mar 18 13:32:33.786972 master-0 kubenswrapper[28504]: I0318 13:32:33.786388 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:33.789220 master-0 kubenswrapper[28504]: I0318 13:32:33.789180 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" Mar 18 13:32:33.795504 master-0 kubenswrapper[28504]: I0318 13:32:33.795433 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" event={"ID":"1ef6fd38-0021-4460-be7d-eb73d64f4d71","Type":"ContainerStarted","Data":"47199b793845b4271adb77f8a463e3fc2dab9489858ef86158978f619398f7be"} Mar 18 13:32:33.816668 master-0 kubenswrapper[28504]: I0318 13:32:33.816561 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-6dd7dd855f-rxf85" podStartSLOduration=8.524872366 podStartE2EDuration="26.816543529s" podCreationTimestamp="2026-03-18 13:32:07 +0000 UTC" firstStartedPulling="2026-03-18 13:32:14.054496374 +0000 UTC m=+511.549302149" lastFinishedPulling="2026-03-18 13:32:32.346167537 +0000 UTC m=+529.840973312" observedRunningTime="2026-03-18 13:32:33.814702177 +0000 UTC m=+531.309507962" watchObservedRunningTime="2026-03-18 13:32:33.816543529 +0000 UTC m=+531.311349294" Mar 18 13:32:33.841521 master-0 kubenswrapper[28504]: I0318 13:32:33.841422 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4chrl" podStartSLOduration=9.592619541 podStartE2EDuration="27.841401662s" podCreationTimestamp="2026-03-18 13:32:06 +0000 UTC" firstStartedPulling="2026-03-18 13:32:13.90865118 +0000 UTC m=+511.403456955" lastFinishedPulling="2026-03-18 13:32:32.157433291 +0000 UTC m=+529.652239076" observedRunningTime="2026-03-18 13:32:33.836565086 +0000 UTC m=+531.331370881" watchObservedRunningTime="2026-03-18 13:32:33.841401662 +0000 UTC m=+531.336207427" Mar 18 13:32:38.380825 master-0 kubenswrapper[28504]: I0318 13:32:38.380521 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-6f5bc999fb-bzb9c" Mar 18 13:32:46.651431 master-0 kubenswrapper[28504]: I0318 13:32:46.651092 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-c6675654-f8zcx" Mar 18 13:32:55.762113 master-0 kubenswrapper[28504]: I0318 13:32:55.762028 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb"] Mar 18 13:32:55.764768 master-0 kubenswrapper[28504]: I0318 13:32:55.764328 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:55.767399 master-0 kubenswrapper[28504]: I0318 13:32:55.767354 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 18 13:32:55.792579 master-0 kubenswrapper[28504]: I0318 13:32:55.790404 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-n54zk"] Mar 18 13:32:55.803604 master-0 kubenswrapper[28504]: I0318 13:32:55.803292 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb"] Mar 18 13:32:55.803604 master-0 kubenswrapper[28504]: I0318 13:32:55.803438 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.810053 master-0 kubenswrapper[28504]: I0318 13:32:55.809582 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 18 13:32:55.811393 master-0 kubenswrapper[28504]: I0318 13:32:55.811099 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 18 13:32:55.904544 master-0 kubenswrapper[28504]: I0318 13:32:55.904456 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vhq\" (UniqueName: \"kubernetes.io/projected/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-kube-api-access-k8vhq\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.904802 master-0 kubenswrapper[28504]: I0318 13:32:55.904721 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-sockets\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 
13:32:55.904802 master-0 kubenswrapper[28504]: I0318 13:32:55.904785 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.904888 master-0 kubenswrapper[28504]: I0318 13:32:55.904868 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-startup\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.904923 master-0 kubenswrapper[28504]: I0318 13:32:55.904893 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics-certs\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.904980 master-0 kubenswrapper[28504]: I0318 13:32:55.904956 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-reloader\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.905023 master-0 kubenswrapper[28504]: I0318 13:32:55.905001 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz79f\" (UniqueName: \"kubernetes.io/projected/6fd63f11-ffae-4160-9728-05059c09ef4d-kube-api-access-rz79f\") pod \"frr-k8s-webhook-server-bcc4b6f68-bf9qb\" (UID: \"6fd63f11-ffae-4160-9728-05059c09ef4d\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:55.905066 master-0 kubenswrapper[28504]: I0318 13:32:55.905050 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fd63f11-ffae-4160-9728-05059c09ef4d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-bf9qb\" (UID: \"6fd63f11-ffae-4160-9728-05059c09ef4d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:55.905148 master-0 kubenswrapper[28504]: I0318 13:32:55.905094 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-conf\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:55.930100 master-0 kubenswrapper[28504]: I0318 13:32:55.930049 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qbx7x"] Mar 18 13:32:55.931511 master-0 kubenswrapper[28504]: I0318 13:32:55.931482 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qbx7x" Mar 18 13:32:55.934325 master-0 kubenswrapper[28504]: I0318 13:32:55.934278 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 18 13:32:55.934687 master-0 kubenswrapper[28504]: I0318 13:32:55.934660 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 18 13:32:55.935458 master-0 kubenswrapper[28504]: I0318 13:32:55.934742 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 18 13:32:55.980025 master-0 kubenswrapper[28504]: I0318 13:32:55.973370 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-8v548"] Mar 18 13:32:55.980025 master-0 kubenswrapper[28504]: I0318 13:32:55.974925 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:55.980025 master-0 kubenswrapper[28504]: I0318 13:32:55.977034 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 18 13:32:55.992739 master-0 kubenswrapper[28504]: I0318 13:32:55.990982 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-8v548"] Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006736 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-startup\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006784 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics-certs\") 
pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006809 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-reloader\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006829 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz79f\" (UniqueName: \"kubernetes.io/projected/6fd63f11-ffae-4160-9728-05059c09ef4d-kube-api-access-rz79f\") pod \"frr-k8s-webhook-server-bcc4b6f68-bf9qb\" (UID: \"6fd63f11-ffae-4160-9728-05059c09ef4d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006859 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fd63f11-ffae-4160-9728-05059c09ef4d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-bf9qb\" (UID: \"6fd63f11-ffae-4160-9728-05059c09ef4d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006878 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-conf\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006922 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8vhq\" (UniqueName: \"kubernetes.io/projected/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-kube-api-access-k8vhq\") pod \"frr-k8s-n54zk\" (UID: 
\"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006966 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-sockets\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.006992 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008006 master-0 kubenswrapper[28504]: I0318 13:32:56.007400 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008483 master-0 kubenswrapper[28504]: I0318 13:32:56.008246 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-startup\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.008483 master-0 kubenswrapper[28504]: E0318 13:32:56.008333 28504 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 18 13:32:56.008483 master-0 kubenswrapper[28504]: E0318 13:32:56.008376 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics-certs podName:eb89e2c4-b8c0-45ff-aa69-eaceb8838561 nodeName:}" 
failed. No retries permitted until 2026-03-18 13:32:56.508360737 +0000 UTC m=+554.003166512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics-certs") pod "frr-k8s-n54zk" (UID: "eb89e2c4-b8c0-45ff-aa69-eaceb8838561") : secret "frr-k8s-certs-secret" not found Mar 18 13:32:56.008989 master-0 kubenswrapper[28504]: I0318 13:32:56.008930 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-conf\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.009108 master-0 kubenswrapper[28504]: I0318 13:32:56.009083 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-frr-sockets\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.009616 master-0 kubenswrapper[28504]: I0318 13:32:56.009514 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-reloader\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.015796 master-0 kubenswrapper[28504]: I0318 13:32:56.015685 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6fd63f11-ffae-4160-9728-05059c09ef4d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-bf9qb\" (UID: \"6fd63f11-ffae-4160-9728-05059c09ef4d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:56.056963 master-0 kubenswrapper[28504]: I0318 13:32:56.048861 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k8vhq\" (UniqueName: \"kubernetes.io/projected/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-kube-api-access-k8vhq\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.056963 master-0 kubenswrapper[28504]: I0318 13:32:56.049995 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz79f\" (UniqueName: \"kubernetes.io/projected/6fd63f11-ffae-4160-9728-05059c09ef4d-kube-api-access-rz79f\") pod \"frr-k8s-webhook-server-bcc4b6f68-bf9qb\" (UID: \"6fd63f11-ffae-4160-9728-05059c09ef4d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108519 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8733982a-3ee0-4a7d-b811-9e79ce602150-cert\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108627 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8733982a-3ee0-4a7d-b811-9e79ce602150-metrics-certs\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108711 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-metrics-certs\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108743 28504 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108795 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-metallb-excludel2\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108845 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnkb\" (UniqueName: \"kubernetes.io/projected/8733982a-3ee0-4a7d-b811-9e79ce602150-kube-api-access-ccnkb\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.109979 master-0 kubenswrapper[28504]: I0318 13:32:56.108878 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmp4\" (UniqueName: \"kubernetes.io/projected/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-kube-api-access-mpmp4\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.111437 master-0 kubenswrapper[28504]: I0318 13:32:56.111362 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:32:56.210376 master-0 kubenswrapper[28504]: I0318 13:32:56.210301 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnkb\" (UniqueName: \"kubernetes.io/projected/8733982a-3ee0-4a7d-b811-9e79ce602150-kube-api-access-ccnkb\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.210652 master-0 kubenswrapper[28504]: I0318 13:32:56.210391 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmp4\" (UniqueName: \"kubernetes.io/projected/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-kube-api-access-mpmp4\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.210652 master-0 kubenswrapper[28504]: I0318 13:32:56.210462 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8733982a-3ee0-4a7d-b811-9e79ce602150-cert\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.210652 master-0 kubenswrapper[28504]: I0318 13:32:56.210506 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8733982a-3ee0-4a7d-b811-9e79ce602150-metrics-certs\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.210866 master-0 kubenswrapper[28504]: I0318 13:32:56.210810 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-metrics-certs\") pod \"speaker-qbx7x\" (UID: 
\"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.210951 master-0 kubenswrapper[28504]: I0318 13:32:56.210891 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.210993 master-0 kubenswrapper[28504]: I0318 13:32:56.210962 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-metallb-excludel2\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.211055 master-0 kubenswrapper[28504]: E0318 13:32:56.211030 28504 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 13:32:56.211124 master-0 kubenswrapper[28504]: E0318 13:32:56.211098 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist podName:4998b602-5dd9-4ce5-90ff-85e81b4d51fe nodeName:}" failed. No retries permitted until 2026-03-18 13:32:56.711076049 +0000 UTC m=+554.205881824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist") pod "speaker-qbx7x" (UID: "4998b602-5dd9-4ce5-90ff-85e81b4d51fe") : secret "metallb-memberlist" not found Mar 18 13:32:56.211824 master-0 kubenswrapper[28504]: I0318 13:32:56.211800 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-metallb-excludel2\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.212007 master-0 kubenswrapper[28504]: I0318 13:32:56.211989 28504 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 13:32:56.214349 master-0 kubenswrapper[28504]: I0318 13:32:56.214323 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-metrics-certs\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.214725 master-0 kubenswrapper[28504]: I0318 13:32:56.214685 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8733982a-3ee0-4a7d-b811-9e79ce602150-metrics-certs\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.231346 master-0 kubenswrapper[28504]: I0318 13:32:56.224467 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8733982a-3ee0-4a7d-b811-9e79ce602150-cert\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.236352 master-0 kubenswrapper[28504]: I0318 
13:32:56.236309 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpmp4\" (UniqueName: \"kubernetes.io/projected/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-kube-api-access-mpmp4\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.240195 master-0 kubenswrapper[28504]: I0318 13:32:56.240144 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnkb\" (UniqueName: \"kubernetes.io/projected/8733982a-3ee0-4a7d-b811-9e79ce602150-kube-api-access-ccnkb\") pod \"controller-7bb4cc7c98-8v548\" (UID: \"8733982a-3ee0-4a7d-b811-9e79ce602150\") " pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.316497 master-0 kubenswrapper[28504]: I0318 13:32:56.316335 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:32:56.515896 master-0 kubenswrapper[28504]: I0318 13:32:56.515801 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics-certs\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.519185 master-0 kubenswrapper[28504]: I0318 13:32:56.519147 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb89e2c4-b8c0-45ff-aa69-eaceb8838561-metrics-certs\") pod \"frr-k8s-n54zk\" (UID: \"eb89e2c4-b8c0-45ff-aa69-eaceb8838561\") " pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:56.543127 master-0 kubenswrapper[28504]: W0318 13:32:56.543057 28504 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd63f11_ffae_4160_9728_05059c09ef4d.slice/crio-e11e19a0e2e725ade9d18defeab24422520e17f3888f5a9ea9a48480c119ba73 WatchSource:0}: Error finding container e11e19a0e2e725ade9d18defeab24422520e17f3888f5a9ea9a48480c119ba73: Status 404 returned error can't find the container with id e11e19a0e2e725ade9d18defeab24422520e17f3888f5a9ea9a48480c119ba73 Mar 18 13:32:56.543500 master-0 kubenswrapper[28504]: I0318 13:32:56.543388 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb"] Mar 18 13:32:56.713826 master-0 kubenswrapper[28504]: I0318 13:32:56.713732 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-8v548"] Mar 18 13:32:56.714788 master-0 kubenswrapper[28504]: W0318 13:32:56.714720 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8733982a_3ee0_4a7d_b811_9e79ce602150.slice/crio-0af0092ab044273e79433ab9a953d84ff7fa51c8551029b6ba04b7c30f63ef9e WatchSource:0}: Error finding container 0af0092ab044273e79433ab9a953d84ff7fa51c8551029b6ba04b7c30f63ef9e: Status 404 returned error can't find the container with id 0af0092ab044273e79433ab9a953d84ff7fa51c8551029b6ba04b7c30f63ef9e Mar 18 13:32:56.718270 master-0 kubenswrapper[28504]: I0318 13:32:56.718226 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:56.718453 master-0 kubenswrapper[28504]: E0318 13:32:56.718417 28504 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 13:32:56.718539 master-0 kubenswrapper[28504]: E0318 13:32:56.718508 28504 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist podName:4998b602-5dd9-4ce5-90ff-85e81b4d51fe nodeName:}" failed. No retries permitted until 2026-03-18 13:32:57.718483043 +0000 UTC m=+555.213288828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist") pod "speaker-qbx7x" (UID: "4998b602-5dd9-4ce5-90ff-85e81b4d51fe") : secret "metallb-memberlist" not found Mar 18 13:32:56.742229 master-0 kubenswrapper[28504]: I0318 13:32:56.742175 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n54zk" Mar 18 13:32:57.029107 master-0 kubenswrapper[28504]: I0318 13:32:57.026082 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"8ee664d956c3fb765be3bfd8e1bc3f5302c187d32d4d07110b4a8573f0972372"} Mar 18 13:32:57.029603 master-0 kubenswrapper[28504]: I0318 13:32:57.029534 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" event={"ID":"6fd63f11-ffae-4160-9728-05059c09ef4d","Type":"ContainerStarted","Data":"e11e19a0e2e725ade9d18defeab24422520e17f3888f5a9ea9a48480c119ba73"} Mar 18 13:32:57.031033 master-0 kubenswrapper[28504]: I0318 13:32:57.030999 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-8v548" event={"ID":"8733982a-3ee0-4a7d-b811-9e79ce602150","Type":"ContainerStarted","Data":"9fdd995aadaac77f20e0cd4c766b4f3f9722aadc92bb10a90c2f916d4fbf6979"} Mar 18 13:32:57.031104 master-0 kubenswrapper[28504]: I0318 13:32:57.031035 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-8v548" 
event={"ID":"8733982a-3ee0-4a7d-b811-9e79ce602150","Type":"ContainerStarted","Data":"0af0092ab044273e79433ab9a953d84ff7fa51c8551029b6ba04b7c30f63ef9e"} Mar 18 13:32:57.741891 master-0 kubenswrapper[28504]: I0318 13:32:57.741834 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:57.745229 master-0 kubenswrapper[28504]: I0318 13:32:57.745185 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4998b602-5dd9-4ce5-90ff-85e81b4d51fe-memberlist\") pod \"speaker-qbx7x\" (UID: \"4998b602-5dd9-4ce5-90ff-85e81b4d51fe\") " pod="metallb-system/speaker-qbx7x" Mar 18 13:32:57.801876 master-0 kubenswrapper[28504]: I0318 13:32:57.801799 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qbx7x" Mar 18 13:32:57.821302 master-0 kubenswrapper[28504]: W0318 13:32:57.821193 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4998b602_5dd9_4ce5_90ff_85e81b4d51fe.slice/crio-50bfa4ce71e27058969b4aaf06c9aa223fed825118fcbb514af402fe4e44a8d7 WatchSource:0}: Error finding container 50bfa4ce71e27058969b4aaf06c9aa223fed825118fcbb514af402fe4e44a8d7: Status 404 returned error can't find the container with id 50bfa4ce71e27058969b4aaf06c9aa223fed825118fcbb514af402fe4e44a8d7 Mar 18 13:32:58.022370 master-0 kubenswrapper[28504]: I0318 13:32:58.022258 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr"] Mar 18 13:32:58.033978 master-0 kubenswrapper[28504]: I0318 13:32:58.023696 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" Mar 18 13:32:58.050441 master-0 kubenswrapper[28504]: I0318 13:32:58.050362 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-5snlm"] Mar 18 13:32:58.051762 master-0 kubenswrapper[28504]: I0318 13:32:58.051730 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.055197 master-0 kubenswrapper[28504]: I0318 13:32:58.054105 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 18 13:32:58.072230 master-0 kubenswrapper[28504]: I0318 13:32:58.072112 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr"] Mar 18 13:32:58.075549 master-0 kubenswrapper[28504]: I0318 13:32:58.075498 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qbx7x" event={"ID":"4998b602-5dd9-4ce5-90ff-85e81b4d51fe","Type":"ContainerStarted","Data":"50bfa4ce71e27058969b4aaf06c9aa223fed825118fcbb514af402fe4e44a8d7"} Mar 18 13:32:58.101196 master-0 kubenswrapper[28504]: I0318 13:32:58.101113 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-5snlm"] Mar 18 13:32:58.108413 master-0 kubenswrapper[28504]: I0318 13:32:58.108363 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pl77f"] Mar 18 13:32:58.109700 master-0 kubenswrapper[28504]: I0318 13:32:58.109637 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.151967 master-0 kubenswrapper[28504]: I0318 13:32:58.149885 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq59\" (UniqueName: \"kubernetes.io/projected/3cdd455d-0f9a-4c4c-99d3-231f0dd90d04-kube-api-access-jsq59\") pod \"nmstate-metrics-9b8c8685d-m55sr\" (UID: \"3cdd455d-0f9a-4c4c-99d3-231f0dd90d04\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" Mar 18 13:32:58.151967 master-0 kubenswrapper[28504]: I0318 13:32:58.150029 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ggs\" (UniqueName: \"kubernetes.io/projected/ce064a48-4cf9-4160-82f0-307c9a64733b-kube-api-access-l4ggs\") pod \"nmstate-webhook-5f558f5558-5snlm\" (UID: \"ce064a48-4cf9-4160-82f0-307c9a64733b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.151967 master-0 kubenswrapper[28504]: I0318 13:32:58.150108 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce064a48-4cf9-4160-82f0-307c9a64733b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-5snlm\" (UID: \"ce064a48-4cf9-4160-82f0-307c9a64733b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.235970 master-0 kubenswrapper[28504]: I0318 13:32:58.234276 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf"] Mar 18 13:32:58.235970 master-0 kubenswrapper[28504]: I0318 13:32:58.235916 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.244053 master-0 kubenswrapper[28504]: I0318 13:32:58.243465 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 18 13:32:58.244053 master-0 kubenswrapper[28504]: I0318 13:32:58.243741 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 18 13:32:58.256104 master-0 kubenswrapper[28504]: I0318 13:32:58.256029 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf"] Mar 18 13:32:58.256341 master-0 kubenswrapper[28504]: I0318 13:32:58.256146 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-nmstate-lock\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.256341 master-0 kubenswrapper[28504]: I0318 13:32:58.256225 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce064a48-4cf9-4160-82f0-307c9a64733b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-5snlm\" (UID: \"ce064a48-4cf9-4160-82f0-307c9a64733b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.256494 master-0 kubenswrapper[28504]: I0318 13:32:58.256455 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-ovs-socket\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.256636 master-0 kubenswrapper[28504]: I0318 13:32:58.256603 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jsq59\" (UniqueName: \"kubernetes.io/projected/3cdd455d-0f9a-4c4c-99d3-231f0dd90d04-kube-api-access-jsq59\") pod \"nmstate-metrics-9b8c8685d-m55sr\" (UID: \"3cdd455d-0f9a-4c4c-99d3-231f0dd90d04\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" Mar 18 13:32:58.256706 master-0 kubenswrapper[28504]: I0318 13:32:58.256682 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-dbus-socket\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.256873 master-0 kubenswrapper[28504]: I0318 13:32:58.256831 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpkbt\" (UniqueName: \"kubernetes.io/projected/7da48df5-5ace-4bcb-a96f-a96bea9b7657-kube-api-access-bpkbt\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.256990 master-0 kubenswrapper[28504]: I0318 13:32:58.256967 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ggs\" (UniqueName: \"kubernetes.io/projected/ce064a48-4cf9-4160-82f0-307c9a64733b-kube-api-access-l4ggs\") pod \"nmstate-webhook-5f558f5558-5snlm\" (UID: \"ce064a48-4cf9-4160-82f0-307c9a64733b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.259768 master-0 kubenswrapper[28504]: I0318 13:32:58.259728 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce064a48-4cf9-4160-82f0-307c9a64733b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-5snlm\" (UID: \"ce064a48-4cf9-4160-82f0-307c9a64733b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" 
Mar 18 13:32:58.323533 master-0 kubenswrapper[28504]: I0318 13:32:58.323488 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsq59\" (UniqueName: \"kubernetes.io/projected/3cdd455d-0f9a-4c4c-99d3-231f0dd90d04-kube-api-access-jsq59\") pod \"nmstate-metrics-9b8c8685d-m55sr\" (UID: \"3cdd455d-0f9a-4c4c-99d3-231f0dd90d04\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" Mar 18 13:32:58.334359 master-0 kubenswrapper[28504]: I0318 13:32:58.333765 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ggs\" (UniqueName: \"kubernetes.io/projected/ce064a48-4cf9-4160-82f0-307c9a64733b-kube-api-access-l4ggs\") pod \"nmstate-webhook-5f558f5558-5snlm\" (UID: \"ce064a48-4cf9-4160-82f0-307c9a64733b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358655 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-ovs-socket\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358709 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnbl\" (UniqueName: \"kubernetes.io/projected/79aaf490-69d3-404d-9c69-e062717930a0-kube-api-access-bxnbl\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358764 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/79aaf490-69d3-404d-9c69-e062717930a0-nginx-conf\") pod 
\"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358786 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-dbus-socket\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358829 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpkbt\" (UniqueName: \"kubernetes.io/projected/7da48df5-5ace-4bcb-a96f-a96bea9b7657-kube-api-access-bpkbt\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358863 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79aaf490-69d3-404d-9c69-e062717930a0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.358913 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-nmstate-lock\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.359000 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-nmstate-lock\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.359036 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-ovs-socket\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.363067 master-0 kubenswrapper[28504]: I0318 13:32:58.359092 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7da48df5-5ace-4bcb-a96f-a96bea9b7657-dbus-socket\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.381485 master-0 kubenswrapper[28504]: I0318 13:32:58.379011 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpkbt\" (UniqueName: \"kubernetes.io/projected/7da48df5-5ace-4bcb-a96f-a96bea9b7657-kube-api-access-bpkbt\") pod \"nmstate-handler-pl77f\" (UID: \"7da48df5-5ace-4bcb-a96f-a96bea9b7657\") " pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.401721 master-0 kubenswrapper[28504]: I0318 13:32:58.400780 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" Mar 18 13:32:58.421963 master-0 kubenswrapper[28504]: I0318 13:32:58.418597 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:32:58.451959 master-0 kubenswrapper[28504]: I0318 13:32:58.451120 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: I0318 13:32:58.460094 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/79aaf490-69d3-404d-9c69-e062717930a0-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: I0318 13:32:58.460220 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79aaf490-69d3-404d-9c69-e062717930a0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: I0318 13:32:58.460341 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxnbl\" (UniqueName: \"kubernetes.io/projected/79aaf490-69d3-404d-9c69-e062717930a0-kube-api-access-bxnbl\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: I0318 13:32:58.461007 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65687d4794-jn97k"] Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: E0318 13:32:58.461585 28504 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: E0318 13:32:58.461689 28504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79aaf490-69d3-404d-9c69-e062717930a0-plugin-serving-cert 
podName:79aaf490-69d3-404d-9c69-e062717930a0 nodeName:}" failed. No retries permitted until 2026-03-18 13:32:58.961663896 +0000 UTC m=+556.456469671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/79aaf490-69d3-404d-9c69-e062717930a0-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-qsfzf" (UID: "79aaf490-69d3-404d-9c69-e062717930a0") : secret "plugin-serving-cert" not found Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: I0318 13:32:58.461693 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/79aaf490-69d3-404d-9c69-e062717930a0-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.469972 master-0 kubenswrapper[28504]: I0318 13:32:58.462269 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.484317 master-0 kubenswrapper[28504]: I0318 13:32:58.475874 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65687d4794-jn97k"] Mar 18 13:32:58.504782 master-0 kubenswrapper[28504]: I0318 13:32:58.504752 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxnbl\" (UniqueName: \"kubernetes.io/projected/79aaf490-69d3-404d-9c69-e062717930a0-kube-api-access-bxnbl\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.534320 master-0 kubenswrapper[28504]: W0318 13:32:58.534255 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7da48df5_5ace_4bcb_a96f_a96bea9b7657.slice/crio-c3a8fb0dca956dfab324f6afb522b07442042a0377f7286104e19f44b3cff0ec WatchSource:0}: Error finding container c3a8fb0dca956dfab324f6afb522b07442042a0377f7286104e19f44b3cff0ec: Status 404 returned error can't find the container with id c3a8fb0dca956dfab324f6afb522b07442042a0377f7286104e19f44b3cff0ec Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562084 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-trusted-ca-bundle\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562162 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2eae47da-e3b1-4825-bf96-a9357a912731-console-oauth-config\") pod 
\"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562246 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-oauth-serving-cert\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562286 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-service-ca\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562307 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2eae47da-e3b1-4825-bf96-a9357a912731-console-serving-cert\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562335 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-console-config\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.573862 master-0 kubenswrapper[28504]: I0318 13:32:58.562369 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-f47v2\" (UniqueName: \"kubernetes.io/projected/2eae47da-e3b1-4825-bf96-a9357a912731-kube-api-access-f47v2\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.675356 master-0 kubenswrapper[28504]: I0318 13:32:58.675281 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-service-ca\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.675356 master-0 kubenswrapper[28504]: I0318 13:32:58.675345 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2eae47da-e3b1-4825-bf96-a9357a912731-console-serving-cert\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.675356 master-0 kubenswrapper[28504]: I0318 13:32:58.675393 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-console-config\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.676348 master-0 kubenswrapper[28504]: I0318 13:32:58.676325 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-console-config\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.676528 master-0 kubenswrapper[28504]: I0318 13:32:58.675471 28504 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f47v2\" (UniqueName: \"kubernetes.io/projected/2eae47da-e3b1-4825-bf96-a9357a912731-kube-api-access-f47v2\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.676583 master-0 kubenswrapper[28504]: I0318 13:32:58.676544 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-trusted-ca-bundle\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.676626 master-0 kubenswrapper[28504]: I0318 13:32:58.676582 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2eae47da-e3b1-4825-bf96-a9357a912731-console-oauth-config\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.676679 master-0 kubenswrapper[28504]: I0318 13:32:58.676656 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-oauth-serving-cert\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.676974 master-0 kubenswrapper[28504]: I0318 13:32:58.676480 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-service-ca\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.677588 master-0 kubenswrapper[28504]: I0318 
13:32:58.677562 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-oauth-serving-cert\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.679068 master-0 kubenswrapper[28504]: I0318 13:32:58.679024 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eae47da-e3b1-4825-bf96-a9357a912731-trusted-ca-bundle\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.681203 master-0 kubenswrapper[28504]: I0318 13:32:58.681134 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2eae47da-e3b1-4825-bf96-a9357a912731-console-oauth-config\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.681625 master-0 kubenswrapper[28504]: I0318 13:32:58.681598 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2eae47da-e3b1-4825-bf96-a9357a912731-console-serving-cert\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.715107 master-0 kubenswrapper[28504]: I0318 13:32:58.715046 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47v2\" (UniqueName: \"kubernetes.io/projected/2eae47da-e3b1-4825-bf96-a9357a912731-kube-api-access-f47v2\") pod \"console-65687d4794-jn97k\" (UID: \"2eae47da-e3b1-4825-bf96-a9357a912731\") " pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.806783 master-0 
kubenswrapper[28504]: I0318 13:32:58.806721 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:32:58.963489 master-0 kubenswrapper[28504]: I0318 13:32:58.963438 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-5snlm"] Mar 18 13:32:58.989114 master-0 kubenswrapper[28504]: I0318 13:32:58.989033 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79aaf490-69d3-404d-9c69-e062717930a0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:58.992517 master-0 kubenswrapper[28504]: I0318 13:32:58.992421 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79aaf490-69d3-404d-9c69-e062717930a0-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qsfzf\" (UID: \"79aaf490-69d3-404d-9c69-e062717930a0\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:59.070593 master-0 kubenswrapper[28504]: I0318 13:32:59.070523 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr"] Mar 18 13:32:59.078006 master-0 kubenswrapper[28504]: W0318 13:32:59.077947 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cdd455d_0f9a_4c4c_99d3_231f0dd90d04.slice/crio-85fe8187b93e6d64cff099358171227923d80836ce2e99b8e0425ad5e177022d WatchSource:0}: Error finding container 85fe8187b93e6d64cff099358171227923d80836ce2e99b8e0425ad5e177022d: Status 404 returned error can't find the container with id 85fe8187b93e6d64cff099358171227923d80836ce2e99b8e0425ad5e177022d Mar 18 13:32:59.088449 
master-0 kubenswrapper[28504]: I0318 13:32:59.088324 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" event={"ID":"ce064a48-4cf9-4160-82f0-307c9a64733b","Type":"ContainerStarted","Data":"4da3ca52b7e3b9a25b18ecfc65358314662661d106f02815034fefb79e16a1b5"} Mar 18 13:32:59.091081 master-0 kubenswrapper[28504]: I0318 13:32:59.091010 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pl77f" event={"ID":"7da48df5-5ace-4bcb-a96f-a96bea9b7657","Type":"ContainerStarted","Data":"c3a8fb0dca956dfab324f6afb522b07442042a0377f7286104e19f44b3cff0ec"} Mar 18 13:32:59.093186 master-0 kubenswrapper[28504]: I0318 13:32:59.093139 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qbx7x" event={"ID":"4998b602-5dd9-4ce5-90ff-85e81b4d51fe","Type":"ContainerStarted","Data":"95a76545c6e74d56303e61e86b8bb9227680c616429ca1a42f7ab13ffc0e210e"} Mar 18 13:32:59.098429 master-0 kubenswrapper[28504]: I0318 13:32:59.098357 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-8v548" event={"ID":"8733982a-3ee0-4a7d-b811-9e79ce602150","Type":"ContainerStarted","Data":"afee2f593c8c9aa3ceb8371d7b8e4f104a2e4e6abe3071ab20395b1b425ee0d3"} Mar 18 13:32:59.123881 master-0 kubenswrapper[28504]: I0318 13:32:59.123738 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-8v548" podStartSLOduration=2.215349298 podStartE2EDuration="4.123680885s" podCreationTimestamp="2026-03-18 13:32:55 +0000 UTC" firstStartedPulling="2026-03-18 13:32:56.853312656 +0000 UTC m=+554.348118441" lastFinishedPulling="2026-03-18 13:32:58.761644253 +0000 UTC m=+556.256450028" observedRunningTime="2026-03-18 13:32:59.118304762 +0000 UTC m=+556.613110537" watchObservedRunningTime="2026-03-18 13:32:59.123680885 +0000 UTC m=+556.618486660" Mar 18 13:32:59.195081 master-0 kubenswrapper[28504]: I0318 
13:32:59.194996 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" Mar 18 13:32:59.525391 master-0 kubenswrapper[28504]: I0318 13:32:59.525337 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65687d4794-jn97k"] Mar 18 13:32:59.530306 master-0 kubenswrapper[28504]: W0318 13:32:59.530255 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eae47da_e3b1_4825_bf96_a9357a912731.slice/crio-3cae6c36d5075729459e41fd0768a9c126c76357849a1408516fa2f8bab620ec WatchSource:0}: Error finding container 3cae6c36d5075729459e41fd0768a9c126c76357849a1408516fa2f8bab620ec: Status 404 returned error can't find the container with id 3cae6c36d5075729459e41fd0768a9c126c76357849a1408516fa2f8bab620ec Mar 18 13:32:59.726965 master-0 kubenswrapper[28504]: I0318 13:32:59.725380 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf"] Mar 18 13:33:00.111977 master-0 kubenswrapper[28504]: I0318 13:33:00.110221 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65687d4794-jn97k" event={"ID":"2eae47da-e3b1-4825-bf96-a9357a912731","Type":"ContainerStarted","Data":"285036ae9e1279cc97dff52f024218815039ea57c941d7f26a289dc12febf514"} Mar 18 13:33:00.111977 master-0 kubenswrapper[28504]: I0318 13:33:00.110283 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65687d4794-jn97k" event={"ID":"2eae47da-e3b1-4825-bf96-a9357a912731","Type":"ContainerStarted","Data":"3cae6c36d5075729459e41fd0768a9c126c76357849a1408516fa2f8bab620ec"} Mar 18 13:33:00.111977 master-0 kubenswrapper[28504]: I0318 13:33:00.111635 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" 
event={"ID":"3cdd455d-0f9a-4c4c-99d3-231f0dd90d04","Type":"ContainerStarted","Data":"85fe8187b93e6d64cff099358171227923d80836ce2e99b8e0425ad5e177022d"} Mar 18 13:33:00.115831 master-0 kubenswrapper[28504]: I0318 13:33:00.113680 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" event={"ID":"79aaf490-69d3-404d-9c69-e062717930a0","Type":"ContainerStarted","Data":"e831318fe1491e9fc793c5d78289c9a69b6d3cc281a78e5d30eb6486dbe5354e"} Mar 18 13:33:00.115831 master-0 kubenswrapper[28504]: I0318 13:33:00.113715 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:33:01.124321 master-0 kubenswrapper[28504]: I0318 13:33:01.124246 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qbx7x" event={"ID":"4998b602-5dd9-4ce5-90ff-85e81b4d51fe","Type":"ContainerStarted","Data":"da5e1c1a6563cbea0cd3bbd184cbc6cf47f0f5fb1d6c7ad42efd469b4f788f9a"} Mar 18 13:33:01.772990 master-0 kubenswrapper[28504]: I0318 13:33:01.772839 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65687d4794-jn97k" podStartSLOduration=3.772815902 podStartE2EDuration="3.772815902s" podCreationTimestamp="2026-03-18 13:32:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:33:00.151377449 +0000 UTC m=+557.646183234" watchObservedRunningTime="2026-03-18 13:33:01.772815902 +0000 UTC m=+559.267621667" Mar 18 13:33:01.780843 master-0 kubenswrapper[28504]: I0318 13:33:01.779097 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qbx7x" podStartSLOduration=5.113590125 podStartE2EDuration="6.77907334s" podCreationTimestamp="2026-03-18 13:32:55 +0000 UTC" firstStartedPulling="2026-03-18 13:32:58.118243923 +0000 UTC m=+555.613049698" 
lastFinishedPulling="2026-03-18 13:32:59.783727118 +0000 UTC m=+557.278532913" observedRunningTime="2026-03-18 13:33:01.772668708 +0000 UTC m=+559.267474503" watchObservedRunningTime="2026-03-18 13:33:01.77907334 +0000 UTC m=+559.273879115" Mar 18 13:33:02.134977 master-0 kubenswrapper[28504]: I0318 13:33:02.134844 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qbx7x" Mar 18 13:33:06.321451 master-0 kubenswrapper[28504]: I0318 13:33:06.321058 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-8v548" Mar 18 13:33:07.189759 master-0 kubenswrapper[28504]: I0318 13:33:07.189692 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pl77f" event={"ID":"7da48df5-5ace-4bcb-a96f-a96bea9b7657","Type":"ContainerStarted","Data":"eacb831ad7b239617666e4022948acb7c47fbc57e208ca4e9bd5bb6b6899c859"} Mar 18 13:33:07.190037 master-0 kubenswrapper[28504]: I0318 13:33:07.189780 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:33:07.193303 master-0 kubenswrapper[28504]: I0318 13:33:07.193254 28504 generic.go:334] "Generic (PLEG): container finished" podID="eb89e2c4-b8c0-45ff-aa69-eaceb8838561" containerID="fc6ce5dd11029ae230cc4456fb43dbebd3f8113a386bed4c81d26af6884ff466" exitCode=0 Mar 18 13:33:07.193303 master-0 kubenswrapper[28504]: I0318 13:33:07.193287 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerDied","Data":"fc6ce5dd11029ae230cc4456fb43dbebd3f8113a386bed4c81d26af6884ff466"} Mar 18 13:33:07.197127 master-0 kubenswrapper[28504]: I0318 13:33:07.197091 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" 
event={"ID":"3cdd455d-0f9a-4c4c-99d3-231f0dd90d04","Type":"ContainerStarted","Data":"b6e2483bff3c28a39d6d73f0d316337c4b83959ab23a3fa7568422f1c481dea1"} Mar 18 13:33:07.197197 master-0 kubenswrapper[28504]: I0318 13:33:07.197135 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" event={"ID":"3cdd455d-0f9a-4c4c-99d3-231f0dd90d04","Type":"ContainerStarted","Data":"60c4225055cffbcd0b6e359582ea4e34d99d8a95bdc0b72493083bb1b8e9d97e"} Mar 18 13:33:07.204142 master-0 kubenswrapper[28504]: I0318 13:33:07.204085 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" event={"ID":"6fd63f11-ffae-4160-9728-05059c09ef4d","Type":"ContainerStarted","Data":"9aa5cfb0ceb230fe6fc64710fee0767ebba6d12ff51ddd959d0a4be75d52cfa4"} Mar 18 13:33:07.204294 master-0 kubenswrapper[28504]: I0318 13:33:07.204257 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:33:07.206033 master-0 kubenswrapper[28504]: I0318 13:33:07.205974 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" event={"ID":"79aaf490-69d3-404d-9c69-e062717930a0","Type":"ContainerStarted","Data":"40dad4f88e2df281c33edd63de76bb4e92ce287b8f7ae2fe6af3899c06b83605"} Mar 18 13:33:07.208957 master-0 kubenswrapper[28504]: I0318 13:33:07.208871 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" event={"ID":"ce064a48-4cf9-4160-82f0-307c9a64733b","Type":"ContainerStarted","Data":"f49c6aeb941d6e34aef3c87eacd7b45f8839ab71c703137f6dfba17b33021cd4"} Mar 18 13:33:07.209210 master-0 kubenswrapper[28504]: I0318 13:33:07.209169 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:33:07.221980 master-0 kubenswrapper[28504]: I0318 
13:33:07.221880 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pl77f" podStartSLOduration=2.3026870329999998 podStartE2EDuration="10.22186063s" podCreationTimestamp="2026-03-18 13:32:57 +0000 UTC" firstStartedPulling="2026-03-18 13:32:58.548989358 +0000 UTC m=+556.043795133" lastFinishedPulling="2026-03-18 13:33:06.468162955 +0000 UTC m=+563.962968730" observedRunningTime="2026-03-18 13:33:07.218915346 +0000 UTC m=+564.713721131" watchObservedRunningTime="2026-03-18 13:33:07.22186063 +0000 UTC m=+564.716666425" Mar 18 13:33:07.281851 master-0 kubenswrapper[28504]: I0318 13:33:07.278089 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" podStartSLOduration=2.365855427 podStartE2EDuration="12.278071518s" podCreationTimestamp="2026-03-18 13:32:55 +0000 UTC" firstStartedPulling="2026-03-18 13:32:56.545657781 +0000 UTC m=+554.040463556" lastFinishedPulling="2026-03-18 13:33:06.457873872 +0000 UTC m=+563.952679647" observedRunningTime="2026-03-18 13:33:07.275531295 +0000 UTC m=+564.770337060" watchObservedRunningTime="2026-03-18 13:33:07.278071518 +0000 UTC m=+564.772877293" Mar 18 13:33:07.281851 master-0 kubenswrapper[28504]: I0318 13:33:07.280206 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qsfzf" podStartSLOduration=2.578398627 podStartE2EDuration="9.280197708s" podCreationTimestamp="2026-03-18 13:32:58 +0000 UTC" firstStartedPulling="2026-03-18 13:32:59.770733128 +0000 UTC m=+557.265538903" lastFinishedPulling="2026-03-18 13:33:06.472532209 +0000 UTC m=+563.967337984" observedRunningTime="2026-03-18 13:33:07.25212223 +0000 UTC m=+564.746928015" watchObservedRunningTime="2026-03-18 13:33:07.280197708 +0000 UTC m=+564.775003483" Mar 18 13:33:07.379117 master-0 kubenswrapper[28504]: I0318 13:33:07.378916 28504 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-m55sr" podStartSLOduration=3.002322852 podStartE2EDuration="10.378893194s" podCreationTimestamp="2026-03-18 13:32:57 +0000 UTC" firstStartedPulling="2026-03-18 13:32:59.081612939 +0000 UTC m=+556.576418704" lastFinishedPulling="2026-03-18 13:33:06.458183271 +0000 UTC m=+563.952989046" observedRunningTime="2026-03-18 13:33:07.333753401 +0000 UTC m=+564.828559176" watchObservedRunningTime="2026-03-18 13:33:07.378893194 +0000 UTC m=+564.873698979" Mar 18 13:33:07.382998 master-0 kubenswrapper[28504]: I0318 13:33:07.382909 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" podStartSLOduration=2.886106598 podStartE2EDuration="10.382890127s" podCreationTimestamp="2026-03-18 13:32:57 +0000 UTC" firstStartedPulling="2026-03-18 13:32:58.9739973 +0000 UTC m=+556.468803075" lastFinishedPulling="2026-03-18 13:33:06.470780829 +0000 UTC m=+563.965586604" observedRunningTime="2026-03-18 13:33:07.372530353 +0000 UTC m=+564.867336128" watchObservedRunningTime="2026-03-18 13:33:07.382890127 +0000 UTC m=+564.877695902" Mar 18 13:33:08.228966 master-0 kubenswrapper[28504]: I0318 13:33:08.228314 28504 generic.go:334] "Generic (PLEG): container finished" podID="eb89e2c4-b8c0-45ff-aa69-eaceb8838561" containerID="0109b96117bc037b6134a04529cb2292c62e839e4e0bf1bc5b90a823ece6e943" exitCode=0 Mar 18 13:33:08.228966 master-0 kubenswrapper[28504]: I0318 13:33:08.228452 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerDied","Data":"0109b96117bc037b6134a04529cb2292c62e839e4e0bf1bc5b90a823ece6e943"} Mar 18 13:33:08.807832 master-0 kubenswrapper[28504]: I0318 13:33:08.807791 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65687d4794-jn97k" Mar 
18 13:33:08.808458 master-0 kubenswrapper[28504]: I0318 13:33:08.808149 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:33:08.814297 master-0 kubenswrapper[28504]: I0318 13:33:08.814231 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:33:09.237462 master-0 kubenswrapper[28504]: I0318 13:33:09.237391 28504 generic.go:334] "Generic (PLEG): container finished" podID="eb89e2c4-b8c0-45ff-aa69-eaceb8838561" containerID="67e109f5f54572e58a19713bde181cce9a973e1910c7b4f58212185873c56d48" exitCode=0 Mar 18 13:33:09.237722 master-0 kubenswrapper[28504]: I0318 13:33:09.237520 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerDied","Data":"67e109f5f54572e58a19713bde181cce9a973e1910c7b4f58212185873c56d48"} Mar 18 13:33:09.242121 master-0 kubenswrapper[28504]: I0318 13:33:09.242081 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65687d4794-jn97k" Mar 18 13:33:09.390986 master-0 kubenswrapper[28504]: I0318 13:33:09.390896 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb78b6b94-7nxcq"] Mar 18 13:33:10.251012 master-0 kubenswrapper[28504]: I0318 13:33:10.250958 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"cdfbae81d2433d38465bcf725aeea72ce2edb7be1323a9f6fe499780d11e0f4a"} Mar 18 13:33:10.251012 master-0 kubenswrapper[28504]: I0318 13:33:10.251014 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" 
event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"2b7bae6ded61a73dcf66e21b3fe70c70bb5e5ceb4968b285e95998fa049b57c3"} Mar 18 13:33:10.262422 master-0 kubenswrapper[28504]: I0318 13:33:10.251026 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"739daede3fb6b6c4400d81874c58c4346242a3310063ddd98fea1b9aefdc79b6"} Mar 18 13:33:10.262422 master-0 kubenswrapper[28504]: I0318 13:33:10.251037 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"b9b94dcbb34ebab0411cba1fd28d1db2d2994f02a0d9f3ced3e4e0461e27a579"} Mar 18 13:33:10.262422 master-0 kubenswrapper[28504]: I0318 13:33:10.251045 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"c0c990935bca2cc842234415952c64c78cc480c0ffeecb9fbf7933a06c25faaa"} Mar 18 13:33:11.266005 master-0 kubenswrapper[28504]: I0318 13:33:11.265948 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n54zk" event={"ID":"eb89e2c4-b8c0-45ff-aa69-eaceb8838561","Type":"ContainerStarted","Data":"a10ed9a84e8235f89bdfae5a37d8025a574ca5dbc4720af90866b9d5599a2597"} Mar 18 13:33:11.266779 master-0 kubenswrapper[28504]: I0318 13:33:11.266752 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-n54zk" Mar 18 13:33:11.293569 master-0 kubenswrapper[28504]: I0318 13:33:11.293484 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-n54zk" podStartSLOduration=6.665256674 podStartE2EDuration="16.293462992s" podCreationTimestamp="2026-03-18 13:32:55 +0000 UTC" firstStartedPulling="2026-03-18 13:32:56.858493153 +0000 UTC m=+554.353298928" 
lastFinishedPulling="2026-03-18 13:33:06.486699471 +0000 UTC m=+563.981505246" observedRunningTime="2026-03-18 13:33:11.288424329 +0000 UTC m=+568.783230114" watchObservedRunningTime="2026-03-18 13:33:11.293462992 +0000 UTC m=+568.788268767" Mar 18 13:33:11.743341 master-0 kubenswrapper[28504]: I0318 13:33:11.743279 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-n54zk" Mar 18 13:33:11.784084 master-0 kubenswrapper[28504]: I0318 13:33:11.784027 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-n54zk" Mar 18 13:33:13.476386 master-0 kubenswrapper[28504]: I0318 13:33:13.476280 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pl77f" Mar 18 13:33:16.117876 master-0 kubenswrapper[28504]: I0318 13:33:16.117805 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-bf9qb" Mar 18 13:33:17.805117 master-0 kubenswrapper[28504]: I0318 13:33:17.805032 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qbx7x" Mar 18 13:33:18.425054 master-0 kubenswrapper[28504]: I0318 13:33:18.424977 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-5snlm" Mar 18 13:33:23.994848 master-0 kubenswrapper[28504]: I0318 13:33:23.994776 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-p2zgz"] Mar 18 13:33:23.996101 master-0 kubenswrapper[28504]: I0318 13:33:23.996067 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:23.999180 master-0 kubenswrapper[28504]: I0318 13:33:23.999136 28504 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 18 13:33:24.001688 master-0 kubenswrapper[28504]: I0318 13:33:24.001630 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-p2zgz"] Mar 18 13:33:24.179477 master-0 kubenswrapper[28504]: I0318 13:33:24.179409 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-registration-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179477 master-0 kubenswrapper[28504]: I0318 13:33:24.179477 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-file-lock-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179522 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-device-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179539 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-metrics-cert\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " 
pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179564 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs4lp\" (UniqueName: \"kubernetes.io/projected/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-kube-api-access-rs4lp\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179585 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-sys\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179611 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-csi-plugin-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179656 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-run-udev\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.179770 master-0 kubenswrapper[28504]: I0318 13:33:24.179735 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-node-plugin-dir\") pod \"vg-manager-p2zgz\" (UID: 
\"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.180032 master-0 kubenswrapper[28504]: I0318 13:33:24.179801 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-lvmd-config\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.180032 master-0 kubenswrapper[28504]: I0318 13:33:24.179988 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-pod-volumes-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281577 master-0 kubenswrapper[28504]: I0318 13:33:24.281453 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-device-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281577 master-0 kubenswrapper[28504]: I0318 13:33:24.281512 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-metrics-cert\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281577 master-0 kubenswrapper[28504]: I0318 13:33:24.281541 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs4lp\" (UniqueName: \"kubernetes.io/projected/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-kube-api-access-rs4lp\") pod \"vg-manager-p2zgz\" (UID: 
\"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281577 master-0 kubenswrapper[28504]: I0318 13:33:24.281577 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-sys\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281577 master-0 kubenswrapper[28504]: I0318 13:33:24.281575 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-device-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 13:33:24.281608 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-csi-plugin-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 13:33:24.281640 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-run-udev\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 13:33:24.281652 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-sys\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 
13:33:24.281789 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-run-udev\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 13:33:24.281791 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-node-plugin-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 13:33:24.281829 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-lvmd-config\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.281911 master-0 kubenswrapper[28504]: I0318 13:33:24.281888 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-pod-volumes-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282227 master-0 kubenswrapper[28504]: I0318 13:33:24.281931 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-registration-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282227 master-0 kubenswrapper[28504]: I0318 13:33:24.281979 28504 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-file-lock-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282227 master-0 kubenswrapper[28504]: I0318 13:33:24.281987 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-csi-plugin-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282227 master-0 kubenswrapper[28504]: I0318 13:33:24.282047 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-node-plugin-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282227 master-0 kubenswrapper[28504]: I0318 13:33:24.282123 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-pod-volumes-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282388 master-0 kubenswrapper[28504]: I0318 13:33:24.282229 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-registration-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282388 master-0 kubenswrapper[28504]: I0318 13:33:24.282258 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: 
\"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-lvmd-config\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.282388 master-0 kubenswrapper[28504]: I0318 13:33:24.282295 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-file-lock-dir\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.287299 master-0 kubenswrapper[28504]: I0318 13:33:24.287268 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-metrics-cert\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.299968 master-0 kubenswrapper[28504]: I0318 13:33:24.298225 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs4lp\" (UniqueName: \"kubernetes.io/projected/d8d1ffef-c93c-4a17-a978-9d3dd6896ff2-kube-api-access-rs4lp\") pod \"vg-manager-p2zgz\" (UID: \"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2\") " pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.324612 master-0 kubenswrapper[28504]: I0318 13:33:24.324550 28504 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:24.774059 master-0 kubenswrapper[28504]: W0318 13:33:24.767485 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d1ffef_c93c_4a17_a978_9d3dd6896ff2.slice/crio-97235eb6a66c2ce23155c2626f5dba1a414167b900545d9bb666f52ab69fdc68 WatchSource:0}: Error finding container 97235eb6a66c2ce23155c2626f5dba1a414167b900545d9bb666f52ab69fdc68: Status 404 returned error can't find the container with id 97235eb6a66c2ce23155c2626f5dba1a414167b900545d9bb666f52ab69fdc68 Mar 18 13:33:24.774059 master-0 kubenswrapper[28504]: I0318 13:33:24.769490 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-p2zgz"] Mar 18 13:33:25.394111 master-0 kubenswrapper[28504]: I0318 13:33:25.393994 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p2zgz" event={"ID":"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2","Type":"ContainerStarted","Data":"3f50cf4be2f35baf64785472b070606aed2953dd9be8ad9aeafb495c9a3e8e4c"} Mar 18 13:33:25.394111 master-0 kubenswrapper[28504]: I0318 13:33:25.394073 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p2zgz" event={"ID":"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2","Type":"ContainerStarted","Data":"97235eb6a66c2ce23155c2626f5dba1a414167b900545d9bb666f52ab69fdc68"} Mar 18 13:33:25.429138 master-0 kubenswrapper[28504]: I0318 13:33:25.429031 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-p2zgz" podStartSLOduration=2.429009547 podStartE2EDuration="2.429009547s" podCreationTimestamp="2026-03-18 13:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:33:25.42243608 +0000 UTC m=+582.917241875" watchObservedRunningTime="2026-03-18 13:33:25.429009547 +0000 
UTC m=+582.923815322" Mar 18 13:33:26.761141 master-0 kubenswrapper[28504]: I0318 13:33:26.761088 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-n54zk" Mar 18 13:33:27.417673 master-0 kubenswrapper[28504]: I0318 13:33:27.417580 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-p2zgz_d8d1ffef-c93c-4a17-a978-9d3dd6896ff2/vg-manager/0.log" Mar 18 13:33:27.417921 master-0 kubenswrapper[28504]: I0318 13:33:27.417897 28504 generic.go:334] "Generic (PLEG): container finished" podID="d8d1ffef-c93c-4a17-a978-9d3dd6896ff2" containerID="3f50cf4be2f35baf64785472b070606aed2953dd9be8ad9aeafb495c9a3e8e4c" exitCode=1 Mar 18 13:33:27.418045 master-0 kubenswrapper[28504]: I0318 13:33:27.418001 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p2zgz" event={"ID":"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2","Type":"ContainerDied","Data":"3f50cf4be2f35baf64785472b070606aed2953dd9be8ad9aeafb495c9a3e8e4c"} Mar 18 13:33:27.418708 master-0 kubenswrapper[28504]: I0318 13:33:27.418680 28504 scope.go:117] "RemoveContainer" containerID="3f50cf4be2f35baf64785472b070606aed2953dd9be8ad9aeafb495c9a3e8e4c" Mar 18 13:33:27.767960 master-0 kubenswrapper[28504]: I0318 13:33:27.767023 28504 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 18 13:33:28.329861 master-0 kubenswrapper[28504]: I0318 13:33:28.329724 28504 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-18T13:33:27.767054969Z","Handler":null,"Name":""} Mar 18 13:33:28.345965 master-0 kubenswrapper[28504]: I0318 13:33:28.341234 28504 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock 
versions: 1.0.0 Mar 18 13:33:28.345965 master-0 kubenswrapper[28504]: I0318 13:33:28.341277 28504 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 18 13:33:28.470130 master-0 kubenswrapper[28504]: I0318 13:33:28.470082 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-p2zgz_d8d1ffef-c93c-4a17-a978-9d3dd6896ff2/vg-manager/0.log" Mar 18 13:33:28.470376 master-0 kubenswrapper[28504]: I0318 13:33:28.470147 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-p2zgz" event={"ID":"d8d1ffef-c93c-4a17-a978-9d3dd6896ff2","Type":"ContainerStarted","Data":"b79bd605f8d18583721cb820969e2f81c2b58b42ef54e17f5bd81d971247cbf5"} Mar 18 13:33:34.324974 master-0 kubenswrapper[28504]: I0318 13:33:34.324838 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:34.327902 master-0 kubenswrapper[28504]: I0318 13:33:34.327828 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:34.443144 master-0 kubenswrapper[28504]: I0318 13:33:34.443002 28504 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bb78b6b94-7nxcq" podUID="e007d827-7949-4726-a68f-53cbb78268f9" containerName="console" containerID="cri-o://c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a" gracePeriod=15 Mar 18 13:33:34.522529 master-0 kubenswrapper[28504]: I0318 13:33:34.522455 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:34.523783 master-0 kubenswrapper[28504]: I0318 13:33:34.523745 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-p2zgz" Mar 18 13:33:35.088365 master-0 kubenswrapper[28504]: 
I0318 13:33:35.088311 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb78b6b94-7nxcq_e007d827-7949-4726-a68f-53cbb78268f9/console/0.log" Mar 18 13:33:35.088629 master-0 kubenswrapper[28504]: I0318 13:33:35.088490 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:33:35.120612 master-0 kubenswrapper[28504]: I0318 13:33:35.120552 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-oauth-serving-cert\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.120818 master-0 kubenswrapper[28504]: I0318 13:33:35.120699 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-serving-cert\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.120818 master-0 kubenswrapper[28504]: I0318 13:33:35.120719 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-service-ca\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.120818 master-0 kubenswrapper[28504]: I0318 13:33:35.120746 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-console-config\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.120818 master-0 kubenswrapper[28504]: I0318 13:33:35.120775 28504 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-oauth-config\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.120962 master-0 kubenswrapper[28504]: I0318 13:33:35.120829 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-trusted-ca-bundle\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.120962 master-0 kubenswrapper[28504]: I0318 13:33:35.120856 28504 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7cf7\" (UniqueName: \"kubernetes.io/projected/e007d827-7949-4726-a68f-53cbb78268f9-kube-api-access-c7cf7\") pod \"e007d827-7949-4726-a68f-53cbb78268f9\" (UID: \"e007d827-7949-4726-a68f-53cbb78268f9\") " Mar 18 13:33:35.121057 master-0 kubenswrapper[28504]: I0318 13:33:35.121013 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:33:35.121349 master-0 kubenswrapper[28504]: I0318 13:33:35.121332 28504 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.121349 master-0 kubenswrapper[28504]: I0318 13:33:35.121332 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-service-ca" (OuterVolumeSpecName: "service-ca") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:33:35.121728 master-0 kubenswrapper[28504]: I0318 13:33:35.121705 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:33:35.123148 master-0 kubenswrapper[28504]: I0318 13:33:35.123075 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-console-config" (OuterVolumeSpecName: "console-config") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:33:35.129657 master-0 kubenswrapper[28504]: I0318 13:33:35.129594 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:33:35.135259 master-0 kubenswrapper[28504]: I0318 13:33:35.135167 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:33:35.135526 master-0 kubenswrapper[28504]: I0318 13:33:35.135457 28504 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e007d827-7949-4726-a68f-53cbb78268f9-kube-api-access-c7cf7" (OuterVolumeSpecName: "kube-api-access-c7cf7") pod "e007d827-7949-4726-a68f-53cbb78268f9" (UID: "e007d827-7949-4726-a68f-53cbb78268f9"). InnerVolumeSpecName "kube-api-access-c7cf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:33:35.231971 master-0 kubenswrapper[28504]: I0318 13:33:35.226021 28504 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.231971 master-0 kubenswrapper[28504]: I0318 13:33:35.226085 28504 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.231971 master-0 kubenswrapper[28504]: I0318 13:33:35.226099 28504 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.231971 master-0 kubenswrapper[28504]: I0318 13:33:35.226112 28504 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e007d827-7949-4726-a68f-53cbb78268f9-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.231971 master-0 kubenswrapper[28504]: I0318 13:33:35.226134 28504 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007d827-7949-4726-a68f-53cbb78268f9-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.231971 master-0 kubenswrapper[28504]: I0318 13:33:35.226148 28504 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7cf7\" (UniqueName: \"kubernetes.io/projected/e007d827-7949-4726-a68f-53cbb78268f9-kube-api-access-c7cf7\") on node \"master-0\" DevicePath \"\"" Mar 18 13:33:35.531461 master-0 kubenswrapper[28504]: I0318 13:33:35.531374 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7bb78b6b94-7nxcq_e007d827-7949-4726-a68f-53cbb78268f9/console/0.log" Mar 18 13:33:35.531461 master-0 kubenswrapper[28504]: I0318 13:33:35.531443 28504 generic.go:334] "Generic (PLEG): container finished" podID="e007d827-7949-4726-a68f-53cbb78268f9" containerID="c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a" exitCode=2 Mar 18 13:33:35.532381 master-0 kubenswrapper[28504]: I0318 13:33:35.531515 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb78b6b94-7nxcq" event={"ID":"e007d827-7949-4726-a68f-53cbb78268f9","Type":"ContainerDied","Data":"c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a"} Mar 18 13:33:35.532381 master-0 kubenswrapper[28504]: I0318 13:33:35.531582 28504 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb78b6b94-7nxcq" Mar 18 13:33:35.532381 master-0 kubenswrapper[28504]: I0318 13:33:35.531620 28504 scope.go:117] "RemoveContainer" containerID="c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a" Mar 18 13:33:35.532381 master-0 kubenswrapper[28504]: I0318 13:33:35.531599 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb78b6b94-7nxcq" event={"ID":"e007d827-7949-4726-a68f-53cbb78268f9","Type":"ContainerDied","Data":"671d2de22f61e4fce6ac7029f8005fb417033e767cfbcafb13b50eacaa0e186e"} Mar 18 13:33:35.549924 master-0 kubenswrapper[28504]: I0318 13:33:35.549861 28504 scope.go:117] "RemoveContainer" containerID="c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a" Mar 18 13:33:35.550883 master-0 kubenswrapper[28504]: E0318 13:33:35.550831 28504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a\": container with ID starting with 
c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a not found: ID does not exist" containerID="c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a"
Mar 18 13:33:35.550976 master-0 kubenswrapper[28504]: I0318 13:33:35.550874 28504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a"} err="failed to get container status \"c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a\": rpc error: code = NotFound desc = could not find container \"c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a\": container with ID starting with c12044514fbab207bd5acc8f447efb8af77358d7071a8006627b064412b4597a not found: ID does not exist"
Mar 18 13:33:35.570519 master-0 kubenswrapper[28504]: I0318 13:33:35.570435 28504 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb78b6b94-7nxcq"]
Mar 18 13:33:35.576889 master-0 kubenswrapper[28504]: I0318 13:33:35.576810 28504 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bb78b6b94-7nxcq"]
Mar 18 13:33:36.674337 master-0 kubenswrapper[28504]: I0318 13:33:36.672632 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sp2k4"]
Mar 18 13:33:36.674337 master-0 kubenswrapper[28504]: E0318 13:33:36.673149 28504 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e007d827-7949-4726-a68f-53cbb78268f9" containerName="console"
Mar 18 13:33:36.674337 master-0 kubenswrapper[28504]: I0318 13:33:36.673168 28504 state_mem.go:107] "Deleted CPUSet assignment" podUID="e007d827-7949-4726-a68f-53cbb78268f9" containerName="console"
Mar 18 13:33:36.674337 master-0 kubenswrapper[28504]: I0318 13:33:36.673387 28504 memory_manager.go:354] "RemoveStaleState removing state" podUID="e007d827-7949-4726-a68f-53cbb78268f9" containerName="console"
Mar 18 13:33:36.674337 master-0 kubenswrapper[28504]: I0318 13:33:36.674184 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:36.684853 master-0 kubenswrapper[28504]: I0318 13:33:36.683904 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 18 13:33:36.684853 master-0 kubenswrapper[28504]: I0318 13:33:36.684225 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 18 13:33:36.709879 master-0 kubenswrapper[28504]: I0318 13:33:36.708800 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sp2k4"]
Mar 18 13:33:36.774319 master-0 kubenswrapper[28504]: I0318 13:33:36.774269 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8n2\" (UniqueName: \"kubernetes.io/projected/5a45c961-12d7-456a-b926-6277cbcdcc1d-kube-api-access-vq8n2\") pod \"openstack-operator-index-sp2k4\" (UID: \"5a45c961-12d7-456a-b926-6277cbcdcc1d\") " pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:36.787416 master-0 kubenswrapper[28504]: I0318 13:33:36.785930 28504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e007d827-7949-4726-a68f-53cbb78268f9" path="/var/lib/kubelet/pods/e007d827-7949-4726-a68f-53cbb78268f9/volumes"
Mar 18 13:33:36.876819 master-0 kubenswrapper[28504]: I0318 13:33:36.876742 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8n2\" (UniqueName: \"kubernetes.io/projected/5a45c961-12d7-456a-b926-6277cbcdcc1d-kube-api-access-vq8n2\") pod \"openstack-operator-index-sp2k4\" (UID: \"5a45c961-12d7-456a-b926-6277cbcdcc1d\") " pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:36.898498 master-0 kubenswrapper[28504]: I0318 13:33:36.898430 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8n2\" (UniqueName: \"kubernetes.io/projected/5a45c961-12d7-456a-b926-6277cbcdcc1d-kube-api-access-vq8n2\") pod \"openstack-operator-index-sp2k4\" (UID: \"5a45c961-12d7-456a-b926-6277cbcdcc1d\") " pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:37.073979 master-0 kubenswrapper[28504]: I0318 13:33:37.073883 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:37.541690 master-0 kubenswrapper[28504]: W0318 13:33:37.541641 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a45c961_12d7_456a_b926_6277cbcdcc1d.slice/crio-86f78383b4b095da0207c4a9947a531becfe5eec31764de932ef623548ecbe5a WatchSource:0}: Error finding container 86f78383b4b095da0207c4a9947a531becfe5eec31764de932ef623548ecbe5a: Status 404 returned error can't find the container with id 86f78383b4b095da0207c4a9947a531becfe5eec31764de932ef623548ecbe5a
Mar 18 13:33:37.544307 master-0 kubenswrapper[28504]: I0318 13:33:37.544244 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sp2k4"]
Mar 18 13:33:38.558043 master-0 kubenswrapper[28504]: I0318 13:33:38.557959 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sp2k4" event={"ID":"5a45c961-12d7-456a-b926-6277cbcdcc1d","Type":"ContainerStarted","Data":"86f78383b4b095da0207c4a9947a531becfe5eec31764de932ef623548ecbe5a"}
Mar 18 13:33:39.568596 master-0 kubenswrapper[28504]: I0318 13:33:39.568445 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sp2k4" event={"ID":"5a45c961-12d7-456a-b926-6277cbcdcc1d","Type":"ContainerStarted","Data":"574a334ab3f8755fd4c7c031162a1a4a9eb40039bb031f9c22a78fef09f26a85"}
Mar 18 13:33:39.588716 master-0 kubenswrapper[28504]: I0318 13:33:39.588628 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sp2k4" podStartSLOduration=1.8718874429999999 podStartE2EDuration="3.588606384s" podCreationTimestamp="2026-03-18 13:33:36 +0000 UTC" firstStartedPulling="2026-03-18 13:33:37.545121404 +0000 UTC m=+595.039927179" lastFinishedPulling="2026-03-18 13:33:39.261840345 +0000 UTC m=+596.756646120" observedRunningTime="2026-03-18 13:33:39.585540167 +0000 UTC m=+597.080345952" watchObservedRunningTime="2026-03-18 13:33:39.588606384 +0000 UTC m=+597.083412159"
Mar 18 13:33:47.075108 master-0 kubenswrapper[28504]: I0318 13:33:47.075042 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:47.075108 master-0 kubenswrapper[28504]: I0318 13:33:47.075095 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:47.108964 master-0 kubenswrapper[28504]: I0318 13:33:47.108849 28504 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:33:47.673892 master-0 kubenswrapper[28504]: I0318 13:33:47.673835 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sp2k4"
Mar 18 13:34:05.911462 master-0 kubenswrapper[28504]: I0318 13:34:05.911358 28504 scope.go:117] "RemoveContainer" containerID="08b274aeaf9abbd5f8e5365d511a8523a672bf472c4f314741ea06a6ce223aa8"
Mar 18 13:38:48.091130 master-0 kubenswrapper[28504]: I0318 13:38:48.091038 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-95sj7/must-gather-htdcq"]
Mar 18 13:38:48.116558 master-0 kubenswrapper[28504]: I0318 13:38:48.100507 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.126606 master-0 kubenswrapper[28504]: I0318 13:38:48.119818 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-95sj7/must-gather-mvhj2"]
Mar 18 13:38:48.126606 master-0 kubenswrapper[28504]: I0318 13:38:48.121683 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.126606 master-0 kubenswrapper[28504]: I0318 13:38:48.125436 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-95sj7/must-gather-mvhj2"]
Mar 18 13:38:48.137737 master-0 kubenswrapper[28504]: I0318 13:38:48.137328 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-95sj7/must-gather-htdcq"]
Mar 18 13:38:48.146169 master-0 kubenswrapper[28504]: I0318 13:38:48.139565 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-95sj7"/"openshift-service-ca.crt"
Mar 18 13:38:48.146169 master-0 kubenswrapper[28504]: I0318 13:38:48.139570 28504 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-95sj7"/"kube-root-ca.crt"
Mar 18 13:38:48.206118 master-0 kubenswrapper[28504]: I0318 13:38:48.205576 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6ce65e0f-bf0c-44d6-bb96-c2f859338310-must-gather-output\") pod \"must-gather-htdcq\" (UID: \"6ce65e0f-bf0c-44d6-bb96-c2f859338310\") " pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.206118 master-0 kubenswrapper[28504]: I0318 13:38:48.205999 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qffx\" (UniqueName: \"kubernetes.io/projected/6ce65e0f-bf0c-44d6-bb96-c2f859338310-kube-api-access-8qffx\") pod \"must-gather-htdcq\" (UID: \"6ce65e0f-bf0c-44d6-bb96-c2f859338310\") " pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.308232 master-0 kubenswrapper[28504]: I0318 13:38:48.308154 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9sl\" (UniqueName: \"kubernetes.io/projected/6074de49-4e18-42db-8345-f8d9af014737-kube-api-access-fk9sl\") pod \"must-gather-mvhj2\" (UID: \"6074de49-4e18-42db-8345-f8d9af014737\") " pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.308232 master-0 kubenswrapper[28504]: I0318 13:38:48.308232 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6ce65e0f-bf0c-44d6-bb96-c2f859338310-must-gather-output\") pod \"must-gather-htdcq\" (UID: \"6ce65e0f-bf0c-44d6-bb96-c2f859338310\") " pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.308569 master-0 kubenswrapper[28504]: I0318 13:38:48.308394 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qffx\" (UniqueName: \"kubernetes.io/projected/6ce65e0f-bf0c-44d6-bb96-c2f859338310-kube-api-access-8qffx\") pod \"must-gather-htdcq\" (UID: \"6ce65e0f-bf0c-44d6-bb96-c2f859338310\") " pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.308711 master-0 kubenswrapper[28504]: I0318 13:38:48.308665 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6074de49-4e18-42db-8345-f8d9af014737-must-gather-output\") pod \"must-gather-mvhj2\" (UID: \"6074de49-4e18-42db-8345-f8d9af014737\") " pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.308774 master-0 kubenswrapper[28504]: I0318 13:38:48.308739 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6ce65e0f-bf0c-44d6-bb96-c2f859338310-must-gather-output\") pod \"must-gather-htdcq\" (UID: \"6ce65e0f-bf0c-44d6-bb96-c2f859338310\") " pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.325858 master-0 kubenswrapper[28504]: I0318 13:38:48.325802 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qffx\" (UniqueName: \"kubernetes.io/projected/6ce65e0f-bf0c-44d6-bb96-c2f859338310-kube-api-access-8qffx\") pod \"must-gather-htdcq\" (UID: \"6ce65e0f-bf0c-44d6-bb96-c2f859338310\") " pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.410344 master-0 kubenswrapper[28504]: I0318 13:38:48.410211 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9sl\" (UniqueName: \"kubernetes.io/projected/6074de49-4e18-42db-8345-f8d9af014737-kube-api-access-fk9sl\") pod \"must-gather-mvhj2\" (UID: \"6074de49-4e18-42db-8345-f8d9af014737\") " pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.410565 master-0 kubenswrapper[28504]: I0318 13:38:48.410379 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6074de49-4e18-42db-8345-f8d9af014737-must-gather-output\") pod \"must-gather-mvhj2\" (UID: \"6074de49-4e18-42db-8345-f8d9af014737\") " pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.411189 master-0 kubenswrapper[28504]: I0318 13:38:48.411139 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6074de49-4e18-42db-8345-f8d9af014737-must-gather-output\") pod \"must-gather-mvhj2\" (UID: \"6074de49-4e18-42db-8345-f8d9af014737\") " pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.428209 master-0 kubenswrapper[28504]: I0318 13:38:48.428150 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9sl\" (UniqueName: \"kubernetes.io/projected/6074de49-4e18-42db-8345-f8d9af014737-kube-api-access-fk9sl\") pod \"must-gather-mvhj2\" (UID: \"6074de49-4e18-42db-8345-f8d9af014737\") " pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:48.472085 master-0 kubenswrapper[28504]: I0318 13:38:48.472019 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-95sj7/must-gather-htdcq"
Mar 18 13:38:48.490960 master-0 kubenswrapper[28504]: I0318 13:38:48.490865 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-95sj7/must-gather-mvhj2"
Mar 18 13:38:49.231077 master-0 kubenswrapper[28504]: I0318 13:38:49.231002 28504 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 13:38:49.236026 master-0 kubenswrapper[28504]: I0318 13:38:49.235486 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-95sj7/must-gather-htdcq"]
Mar 18 13:38:49.304157 master-0 kubenswrapper[28504]: I0318 13:38:49.304125 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-95sj7/must-gather-mvhj2"]
Mar 18 13:38:49.663981 master-0 kubenswrapper[28504]: I0318 13:38:49.663774 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/must-gather-mvhj2" event={"ID":"6074de49-4e18-42db-8345-f8d9af014737","Type":"ContainerStarted","Data":"2fca1d420b35dcd0010e1db5e8c5de9e18c8800c541c31f334b04b591286069c"}
Mar 18 13:38:49.665323 master-0 kubenswrapper[28504]: I0318 13:38:49.665283 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/must-gather-htdcq" event={"ID":"6ce65e0f-bf0c-44d6-bb96-c2f859338310","Type":"ContainerStarted","Data":"e2c54dfffde6f57704bd81d014f7673a36dc5d3b8f950eafe5215de21c2b2c9e"}
Mar 18 13:38:51.690380 master-0 kubenswrapper[28504]: I0318 13:38:51.689890 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/must-gather-mvhj2" event={"ID":"6074de49-4e18-42db-8345-f8d9af014737","Type":"ContainerStarted","Data":"e80c4a798863560196c169a6ecc0c87282055422fb3ac968fb85bfe306afa41d"}
Mar 18 13:38:52.716342 master-0 kubenswrapper[28504]: I0318 13:38:52.716276 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/must-gather-mvhj2" event={"ID":"6074de49-4e18-42db-8345-f8d9af014737","Type":"ContainerStarted","Data":"44256e3c25b92a55bd1737a57656045c482c5ad8ebda5ca757dbb78ec0c405bc"}
Mar 18 13:38:52.745678 master-0 kubenswrapper[28504]: I0318 13:38:52.745588 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-95sj7/must-gather-mvhj2" podStartSLOduration=2.917151686 podStartE2EDuration="4.745569355s" podCreationTimestamp="2026-03-18 13:38:48 +0000 UTC" firstStartedPulling="2026-03-18 13:38:49.302742051 +0000 UTC m=+906.797547826" lastFinishedPulling="2026-03-18 13:38:51.13115972 +0000 UTC m=+908.625965495" observedRunningTime="2026-03-18 13:38:52.741421936 +0000 UTC m=+910.236227711" watchObservedRunningTime="2026-03-18 13:38:52.745569355 +0000 UTC m=+910.240375150"
Mar 18 13:38:53.199736 master-0 kubenswrapper[28504]: I0318 13:38:53.193579 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-2bmkn_e4d0b174-33e4-46ee-863b-b5cc2a271b85/cluster-version-operator/0.log"
Mar 18 13:38:53.926592 master-0 kubenswrapper[28504]: I0318 13:38:53.926468 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-2bmkn_e4d0b174-33e4-46ee-863b-b5cc2a271b85/cluster-version-operator/1.log"
Mar 18 13:38:57.740965 master-0 kubenswrapper[28504]: I0318 13:38:57.726271 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log"
Mar 18 13:38:57.808780 master-0 kubenswrapper[28504]: I0318 13:38:57.808718 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-8v548_8733982a-3ee0-4a7d-b811-9e79ce602150/controller/0.log"
Mar 18 13:38:57.824444 master-0 kubenswrapper[28504]: I0318 13:38:57.824389 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-8v548_8733982a-3ee0-4a7d-b811-9e79ce602150/kube-rbac-proxy/0.log"
Mar 18 13:38:57.894482 master-0 kubenswrapper[28504]: I0318 13:38:57.892519 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/controller/0.log"
Mar 18 13:38:57.966821 master-0 kubenswrapper[28504]: I0318 13:38:57.966782 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-qsfzf_79aaf490-69d3-404d-9c69-e062717930a0/nmstate-console-plugin/0.log"
Mar 18 13:38:58.012031 master-0 kubenswrapper[28504]: I0318 13:38:58.011639 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pl77f_7da48df5-5ace-4bcb-a96f-a96bea9b7657/nmstate-handler/0.log"
Mar 18 13:38:58.027439 master-0 kubenswrapper[28504]: I0318 13:38:58.024823 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/frr/0.log"
Mar 18 13:38:58.040452 master-0 kubenswrapper[28504]: I0318 13:38:58.040413 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m55sr_3cdd455d-0f9a-4c4c-99d3-231f0dd90d04/nmstate-metrics/0.log"
Mar 18 13:38:58.045962 master-0 kubenswrapper[28504]: I0318 13:38:58.043243 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/reloader/0.log"
Mar 18 13:38:58.067194 master-0 kubenswrapper[28504]: I0318 13:38:58.067157 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/frr-metrics/0.log"
Mar 18 13:38:58.069607 master-0 kubenswrapper[28504]: I0318 13:38:58.069562 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m55sr_3cdd455d-0f9a-4c4c-99d3-231f0dd90d04/kube-rbac-proxy/0.log"
Mar 18 13:38:58.095227 master-0 kubenswrapper[28504]: I0318 13:38:58.092778 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/kube-rbac-proxy/0.log"
Mar 18 13:38:58.095227 master-0 kubenswrapper[28504]: I0318 13:38:58.094761 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-kx4rd_9b690cda-08f0-4606-a18f-a1be217b5037/nmstate-operator/0.log"
Mar 18 13:38:58.116592 master-0 kubenswrapper[28504]: I0318 13:38:58.116532 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/kube-rbac-proxy-frr/0.log"
Mar 18 13:38:58.118361 master-0 kubenswrapper[28504]: I0318 13:38:58.118267 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-5snlm_ce064a48-4cf9-4160-82f0-307c9a64733b/nmstate-webhook/0.log"
Mar 18 13:38:58.123419 master-0 kubenswrapper[28504]: I0318 13:38:58.123344 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log"
Mar 18 13:38:58.125920 master-0 kubenswrapper[28504]: I0318 13:38:58.125835 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-frr-files/0.log"
Mar 18 13:38:58.158264 master-0 kubenswrapper[28504]: I0318 13:38:58.158223 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-reloader/0.log"
Mar 18 13:38:58.168346 master-0 kubenswrapper[28504]: I0318 13:38:58.168302 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log"
Mar 18 13:38:58.169267 master-0 kubenswrapper[28504]: I0318 13:38:58.169082 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-metrics/0.log"
Mar 18 13:38:58.185372 master-0 kubenswrapper[28504]: I0318 13:38:58.185335 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log"
Mar 18 13:38:58.186026 master-0 kubenswrapper[28504]: I0318 13:38:58.186003 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-bf9qb_6fd63f11-ffae-4160-9728-05059c09ef4d/frr-k8s-webhook-server/0.log"
Mar 18 13:38:58.219286 master-0 kubenswrapper[28504]: I0318 13:38:58.219242 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c6675654-f8zcx_4f488544-10c9-4e31-b183-60eb24cd6593/manager/0.log"
Mar 18 13:38:58.237493 master-0 kubenswrapper[28504]: I0318 13:38:58.236497 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log"
Mar 18 13:38:58.244209 master-0 kubenswrapper[28504]: I0318 13:38:58.238619 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67655b5bb9-s6lrj_133f045a-3c88-4373-84e8-55217f947865/webhook-server/0.log"
Mar 18 13:38:58.257746 master-0 kubenswrapper[28504]: I0318 13:38:58.257116 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log"
Mar 18 13:38:58.292288 master-0 kubenswrapper[28504]: I0318 13:38:58.292192 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log"
Mar 18 13:38:58.335336 master-0 kubenswrapper[28504]: I0318 13:38:58.335281 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log"
Mar 18 13:38:58.351756 master-0 kubenswrapper[28504]: I0318 13:38:58.348107 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbx7x_4998b602-5dd9-4ce5-90ff-85e81b4d51fe/speaker/0.log"
Mar 18 13:38:58.363109 master-0 kubenswrapper[28504]: I0318 13:38:58.362871 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbx7x_4998b602-5dd9-4ce5-90ff-85e81b4d51fe/kube-rbac-proxy/0.log"
Mar 18 13:38:58.403354 master-0 kubenswrapper[28504]: I0318 13:38:58.403301 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_f32b4d4d-df54-4fa7-a940-297e064fea44/installer/0.log"
Mar 18 13:38:58.478967 master-0 kubenswrapper[28504]: I0318 13:38:58.477057 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_5879ced8-4ac1-40e3-bf93-38b8a7497823/installer/0.log"
Mar 18 13:38:59.771602 master-0 kubenswrapper[28504]: I0318 13:38:59.771454 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-m2vzq_c0403564-f8d9-4d81-b9e3-d9028fe58590/assisted-installer-controller/0.log"
Mar 18 13:39:01.629725 master-0 kubenswrapper[28504]: I0318 13:39:01.629575 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-67d599f9d6-s5drj_1951681b-a335-4cae-8006-202d4cdb5b96/oauth-openshift/0.log"
Mar 18 13:39:03.060894 master-0 kubenswrapper[28504]: I0318 13:39:03.060809 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-mqh5c_8ce8e99d-7b02-4bf4-a438-adde851918cb/authentication-operator/0.log"
Mar 18 13:39:03.097775 master-0 kubenswrapper[28504]: I0318 13:39:03.097710 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-mqh5c_8ce8e99d-7b02-4bf4-a438-adde851918cb/authentication-operator/1.log"
Mar 18 13:39:03.815998 master-0 kubenswrapper[28504]: I0318 13:39:03.815876 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-mtnzv_ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/router/4.log"
Mar 18 13:39:03.821469 master-0 kubenswrapper[28504]: I0318 13:39:03.821408 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-mtnzv_ab9ef7c0-f9f2-4048-9857-06ab48f36ecf/router/3.log"
Mar 18 13:39:04.453094 master-0 kubenswrapper[28504]: I0318 13:39:04.452270 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7d95bbc4f4-4ch22_9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/oauth-apiserver/0.log"
Mar 18 13:39:04.475120 master-0 kubenswrapper[28504]: I0318 13:39:04.475073 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7d95bbc4f4-4ch22_9a1019b1-2b2d-4d63-bd2b-8c45bb85c90a/fix-audit-permissions/0.log"
Mar 18 13:39:04.565031 master-0 kubenswrapper[28504]: I0318 13:39:04.564986 28504 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"]
Mar 18 13:39:04.566638 master-0 kubenswrapper[28504]: I0318 13:39:04.566602 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.612434 master-0 kubenswrapper[28504]: I0318 13:39:04.612342 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"]
Mar 18 13:39:04.727769 master-0 kubenswrapper[28504]: I0318 13:39:04.727706 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-podres\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.727769 master-0 kubenswrapper[28504]: I0318 13:39:04.727770 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxz4s\" (UniqueName: \"kubernetes.io/projected/65735be4-7790-4e74-a663-a13b065dffa2-kube-api-access-hxz4s\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.728162 master-0 kubenswrapper[28504]: I0318 13:39:04.727966 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-lib-modules\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.728243 master-0 kubenswrapper[28504]: I0318 13:39:04.728214 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-proc\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.728307 master-0 kubenswrapper[28504]: I0318 13:39:04.728269 28504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-sys\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.830385 master-0 kubenswrapper[28504]: I0318 13:39:04.830308 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-proc\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.830629 master-0 kubenswrapper[28504]: I0318 13:39:04.830453 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-sys\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.830629 master-0 kubenswrapper[28504]: I0318 13:39:04.830556 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-sys\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.830723 master-0 kubenswrapper[28504]: I0318 13:39:04.830660 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-podres\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.830767 master-0 kubenswrapper[28504]: I0318 13:39:04.830741 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxz4s\" (UniqueName: \"kubernetes.io/projected/65735be4-7790-4e74-a663-a13b065dffa2-kube-api-access-hxz4s\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.830906 master-0 kubenswrapper[28504]: I0318 13:39:04.830879 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-podres\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.831037 master-0 kubenswrapper[28504]: I0318 13:39:04.831006 28504 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-lib-modules\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.831135 master-0 kubenswrapper[28504]: I0318 13:39:04.831110 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-lib-modules\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.831310 master-0 kubenswrapper[28504]: I0318 13:39:04.831276 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/65735be4-7790-4e74-a663-a13b065dffa2-proc\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.850068 master-0 kubenswrapper[28504]: I0318 13:39:04.850004 28504 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxz4s\" (UniqueName: \"kubernetes.io/projected/65735be4-7790-4e74-a663-a13b065dffa2-kube-api-access-hxz4s\") pod \"perf-node-gather-daemonset-2zdkh\" (UID: \"65735be4-7790-4e74-a663-a13b065dffa2\") " pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.917508 master-0 kubenswrapper[28504]: I0318 13:39:04.917449 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/must-gather-htdcq" event={"ID":"6ce65e0f-bf0c-44d6-bb96-c2f859338310","Type":"ContainerStarted","Data":"1b5dbe2a66d8803a0a9efa55c8115f70f2edc95ba2b38d8385a903ff96a52cc4"}
Mar 18 13:39:04.917508 master-0 kubenswrapper[28504]: I0318 13:39:04.917507 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/must-gather-htdcq" event={"ID":"6ce65e0f-bf0c-44d6-bb96-c2f859338310","Type":"ContainerStarted","Data":"03ab23bb5ff6d6554d2b02141823b64126b0a8152a7b552bc89d13c6f65c12e9"}
Mar 18 13:39:04.933545 master-0 kubenswrapper[28504]: I0318 13:39:04.933486 28504 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:04.947190 master-0 kubenswrapper[28504]: I0318 13:39:04.947096 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-95sj7/must-gather-htdcq" podStartSLOduration=2.158390559 podStartE2EDuration="16.947076328s" podCreationTimestamp="2026-03-18 13:38:48 +0000 UTC" firstStartedPulling="2026-03-18 13:38:49.230914063 +0000 UTC m=+906.725719838" lastFinishedPulling="2026-03-18 13:39:04.019599832 +0000 UTC m=+921.514405607" observedRunningTime="2026-03-18 13:39:04.938655065 +0000 UTC m=+922.433460840" watchObservedRunningTime="2026-03-18 13:39:04.947076328 +0000 UTC m=+922.441882103"
Mar 18 13:39:05.113215 master-0 kubenswrapper[28504]: I0318 13:39:05.111469 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/kube-rbac-proxy/0.log"
Mar 18 13:39:05.158919 master-0 kubenswrapper[28504]: I0318 13:39:05.158846 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/1.log"
Mar 18 13:39:05.160012 master-0 kubenswrapper[28504]: I0318 13:39:05.159922 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/0.log"
Mar 18 13:39:05.182656 master-0 kubenswrapper[28504]: I0318 13:39:05.182501 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/2.log"
Mar 18 13:39:05.182656 master-0 kubenswrapper[28504]: I0318 13:39:05.182568 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/3.log"
Mar 18 13:39:05.199379 master-0 kubenswrapper[28504]: I0318 13:39:05.199277 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/baremetal-kube-rbac-proxy/0.log"
Mar 18 13:39:05.223682 master-0 kubenswrapper[28504]: I0318 13:39:05.223631 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/1.log"
Mar 18 13:39:05.225342 master-0 kubenswrapper[28504]: I0318 13:39:05.225303 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/0.log"
Mar 18 13:39:05.245593 master-0 kubenswrapper[28504]: I0318 13:39:05.245425 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/kube-rbac-proxy/0.log"
Mar 18 13:39:05.267528 master-0 kubenswrapper[28504]: I0318 13:39:05.266866 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/0.log"
Mar 18 13:39:05.269748 master-0 kubenswrapper[28504]: I0318 13:39:05.269718 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/1.log"
Mar 18 13:39:05.439696 master-0 kubenswrapper[28504]: I0318 13:39:05.436855 28504 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"]
Mar 18 13:39:05.439696 master-0 kubenswrapper[28504]: W0318 13:39:05.437451 28504 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod65735be4_7790_4e74_a663_a13b065dffa2.slice/crio-75e2b94d047af972dc48297f0129e3bad231632586f3f0dd1f9b59a53e864119 WatchSource:0}: Error finding container 75e2b94d047af972dc48297f0129e3bad231632586f3f0dd1f9b59a53e864119: Status 404 returned error can't find the container with id 75e2b94d047af972dc48297f0129e3bad231632586f3f0dd1f9b59a53e864119
Mar 18 13:39:05.930079 master-0 kubenswrapper[28504]: I0318 13:39:05.929972 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh" event={"ID":"65735be4-7790-4e74-a663-a13b065dffa2","Type":"ContainerStarted","Data":"108ba63677d4fabfcf9559fc186284889a931941034452c8443de10101da139e"}
Mar 18 13:39:05.930079 master-0 kubenswrapper[28504]: I0318 13:39:05.930084 28504 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh" event={"ID":"65735be4-7790-4e74-a663-a13b065dffa2","Type":"ContainerStarted","Data":"75e2b94d047af972dc48297f0129e3bad231632586f3f0dd1f9b59a53e864119"}
Mar 18 13:39:05.930790 master-0 kubenswrapper[28504]: I0318 13:39:05.930274 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh"
Mar 18 13:39:06.011869 master-0 kubenswrapper[28504]: I0318 13:39:06.011705 28504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh" podStartSLOduration=2.011681451 podStartE2EDuration="2.011681451s" podCreationTimestamp="2026-03-18 13:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:39:06.011057754 
+0000 UTC m=+923.505863529" watchObservedRunningTime="2026-03-18 13:39:06.011681451 +0000 UTC m=+923.506487226" Mar 18 13:39:06.632033 master-0 kubenswrapper[28504]: I0318 13:39:06.631352 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/cluster-cloud-controller-manager/1.log" Mar 18 13:39:06.633264 master-0 kubenswrapper[28504]: I0318 13:39:06.633181 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/cluster-cloud-controller-manager/0.log" Mar 18 13:39:06.654022 master-0 kubenswrapper[28504]: I0318 13:39:06.651991 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/1.log" Mar 18 13:39:06.654022 master-0 kubenswrapper[28504]: I0318 13:39:06.652252 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/config-sync-controllers/0.log" Mar 18 13:39:06.669910 master-0 kubenswrapper[28504]: I0318 13:39:06.669837 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-ncjbh_d3f208f9-e2e1-4fae-a47a-f58b722e0ad5/kube-rbac-proxy/0.log" Mar 18 13:39:08.337739 master-0 kubenswrapper[28504]: I0318 13:39:08.337686 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-9nw6w_7fa6920b-f7d9-4758-bba9-356a2c8b1b67/kube-rbac-proxy/0.log" Mar 18 13:39:08.368671 master-0 
kubenswrapper[28504]: I0318 13:39:08.368618 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-9nw6w_7fa6920b-f7d9-4758-bba9-356a2c8b1b67/cloud-credential-operator/0.log" Mar 18 13:39:09.694775 master-0 kubenswrapper[28504]: I0318 13:39:09.694726 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-c7nh9_0213214b-693b-411b-8254-48d7826011eb/openshift-config-operator/1.log" Mar 18 13:39:09.697993 master-0 kubenswrapper[28504]: I0318 13:39:09.697949 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-c7nh9_0213214b-693b-411b-8254-48d7826011eb/openshift-config-operator/2.log" Mar 18 13:39:09.709520 master-0 kubenswrapper[28504]: I0318 13:39:09.709485 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-c7nh9_0213214b-693b-411b-8254-48d7826011eb/openshift-api/0.log" Mar 18 13:39:10.332914 master-0 kubenswrapper[28504]: I0318 13:39:10.332259 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-8v548_8733982a-3ee0-4a7d-b811-9e79ce602150/controller/0.log" Mar 18 13:39:10.339180 master-0 kubenswrapper[28504]: I0318 13:39:10.338992 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-8v548_8733982a-3ee0-4a7d-b811-9e79ce602150/kube-rbac-proxy/0.log" Mar 18 13:39:10.367327 master-0 kubenswrapper[28504]: I0318 13:39:10.367266 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/controller/0.log" Mar 18 13:39:10.427743 master-0 kubenswrapper[28504]: I0318 13:39:10.427680 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/frr/0.log" Mar 18 13:39:10.430727 master-0 kubenswrapper[28504]: I0318 13:39:10.430696 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-74d52_8ca88a33-ec5e-415c-b976-cfb6ddfe7da4/console-operator/0.log" Mar 18 13:39:10.435733 master-0 kubenswrapper[28504]: I0318 13:39:10.435682 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/reloader/0.log" Mar 18 13:39:10.444029 master-0 kubenswrapper[28504]: I0318 13:39:10.442724 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/frr-metrics/0.log" Mar 18 13:39:10.450120 master-0 kubenswrapper[28504]: I0318 13:39:10.450083 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/kube-rbac-proxy/0.log" Mar 18 13:39:10.462755 master-0 kubenswrapper[28504]: I0318 13:39:10.462703 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/kube-rbac-proxy-frr/0.log" Mar 18 13:39:10.469826 master-0 kubenswrapper[28504]: I0318 13:39:10.469776 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-frr-files/0.log" Mar 18 13:39:10.477816 master-0 kubenswrapper[28504]: I0318 13:39:10.477778 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-reloader/0.log" Mar 18 13:39:10.488567 master-0 kubenswrapper[28504]: I0318 13:39:10.488510 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-metrics/0.log" Mar 18 13:39:10.501896 master-0 
kubenswrapper[28504]: I0318 13:39:10.501848 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-bf9qb_6fd63f11-ffae-4160-9728-05059c09ef4d/frr-k8s-webhook-server/0.log" Mar 18 13:39:10.531675 master-0 kubenswrapper[28504]: I0318 13:39:10.531629 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c6675654-f8zcx_4f488544-10c9-4e31-b183-60eb24cd6593/manager/0.log" Mar 18 13:39:10.544160 master-0 kubenswrapper[28504]: I0318 13:39:10.544094 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67655b5bb9-s6lrj_133f045a-3c88-4373-84e8-55217f947865/webhook-server/0.log" Mar 18 13:39:10.622566 master-0 kubenswrapper[28504]: I0318 13:39:10.622457 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbx7x_4998b602-5dd9-4ce5-90ff-85e81b4d51fe/speaker/0.log" Mar 18 13:39:10.630693 master-0 kubenswrapper[28504]: I0318 13:39:10.630634 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbx7x_4998b602-5dd9-4ce5-90ff-85e81b4d51fe/kube-rbac-proxy/0.log" Mar 18 13:39:11.037721 master-0 kubenswrapper[28504]: I0318 13:39:11.037655 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65687d4794-jn97k_2eae47da-e3b1-4825-bf96-a9357a912731/console/0.log" Mar 18 13:39:11.062713 master-0 kubenswrapper[28504]: I0318 13:39:11.062665 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-crvh7_2cf62b58-2c1c-4187-8fca-1a60b51a1783/download-server/0.log" Mar 18 13:39:11.928414 master-0 kubenswrapper[28504]: I0318 13:39:11.928338 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-92zqc_7a951627-c032-4846-821c-c4bcbf4a91b9/cluster-storage-operator/0.log" Mar 18 13:39:11.945640 
master-0 kubenswrapper[28504]: I0318 13:39:11.945582 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/3.log" Mar 18 13:39:11.946531 master-0 kubenswrapper[28504]: I0318 13:39:11.946500 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-wkw7f_1ad93612-ab12-4b30-984f-119e1b924a84/snapshot-controller/4.log" Mar 18 13:39:11.971884 master-0 kubenswrapper[28504]: I0318 13:39:11.971836 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-4s6b8_5bccf60c-5b07-4f40-8430-12bfb62661c7/csi-snapshot-controller-operator/1.log" Mar 18 13:39:11.971884 master-0 kubenswrapper[28504]: I0318 13:39:11.971874 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-4s6b8_5bccf60c-5b07-4f40-8430-12bfb62661c7/csi-snapshot-controller-operator/0.log" Mar 18 13:39:12.518808 master-0 kubenswrapper[28504]: I0318 13:39:12.518688 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-bqbzx_da6a763d-2777-40c4-ae1f-c77ced406ea2/dns-operator/0.log" Mar 18 13:39:12.537421 master-0 kubenswrapper[28504]: I0318 13:39:12.537369 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-bqbzx_da6a763d-2777-40c4-ae1f-c77ced406ea2/kube-rbac-proxy/0.log" Mar 18 13:39:12.942201 master-0 kubenswrapper[28504]: I0318 13:39:12.942127 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sp2k4_5a45c961-12d7-456a-b926-6277cbcdcc1d/registry-server/0.log" Mar 18 13:39:13.178340 master-0 kubenswrapper[28504]: I0318 13:39:13.178118 28504 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-wl929_4671673d-afa0-481f-b3a2-2c2b9441b6ce/dns/0.log" Mar 18 13:39:13.200906 master-0 kubenswrapper[28504]: I0318 13:39:13.200786 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-wl929_4671673d-afa0-481f-b3a2-2c2b9441b6ce/kube-rbac-proxy/0.log" Mar 18 13:39:13.221551 master-0 kubenswrapper[28504]: I0318 13:39:13.221515 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-slqms_8f59a12b-d690-44c5-972c-fb4b0b5819f1/dns-node-resolver/0.log" Mar 18 13:39:13.839923 master-0 kubenswrapper[28504]: I0318 13:39:13.839845 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-hmbpl_1bf0ea4e-8b08-488f-b252-39580f46b756/etcd-operator/1.log" Mar 18 13:39:13.843598 master-0 kubenswrapper[28504]: I0318 13:39:13.843513 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-hmbpl_1bf0ea4e-8b08-488f-b252-39580f46b756/etcd-operator/2.log" Mar 18 13:39:14.364886 master-0 kubenswrapper[28504]: I0318 13:39:14.364805 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 13:39:14.713778 master-0 kubenswrapper[28504]: I0318 13:39:14.713711 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 13:39:14.738419 master-0 kubenswrapper[28504]: I0318 13:39:14.738341 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 13:39:14.754255 master-0 kubenswrapper[28504]: I0318 13:39:14.754154 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 
13:39:14.769613 master-0 kubenswrapper[28504]: I0318 13:39:14.769516 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 13:39:14.792708 master-0 kubenswrapper[28504]: I0318 13:39:14.792150 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 13:39:14.815416 master-0 kubenswrapper[28504]: I0318 13:39:14.815372 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 13:39:14.831371 master-0 kubenswrapper[28504]: I0318 13:39:14.831330 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 13:39:14.887155 master-0 kubenswrapper[28504]: I0318 13:39:14.887091 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_f32b4d4d-df54-4fa7-a940-297e064fea44/installer/0.log" Mar 18 13:39:14.929957 master-0 kubenswrapper[28504]: I0318 13:39:14.929851 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_5879ced8-4ac1-40e3-bf93-38b8a7497823/installer/0.log" Mar 18 13:39:14.963117 master-0 kubenswrapper[28504]: I0318 13:39:14.963057 28504 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-95sj7/perf-node-gather-daemonset-2zdkh" Mar 18 13:39:15.586607 master-0 kubenswrapper[28504]: I0318 13:39:15.586558 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-n995f_73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/cluster-image-registry-operator/0.log" Mar 18 13:39:15.589635 master-0 kubenswrapper[28504]: I0318 13:39:15.589603 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-n995f_73c93ee3-cf14-4fea-b2a7-ccfb56e55be4/cluster-image-registry-operator/1.log" Mar 18 13:39:15.604079 master-0 kubenswrapper[28504]: I0318 13:39:15.604033 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-nzrmh_ebbaf8e6-9de8-44ce-9f6c-bb4804723598/node-ca/0.log" Mar 18 13:39:16.148120 master-0 kubenswrapper[28504]: I0318 13:39:16.148066 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/6.log" Mar 18 13:39:16.148918 master-0 kubenswrapper[28504]: I0318 13:39:16.148864 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/ingress-operator/5.log" Mar 18 13:39:16.165845 master-0 kubenswrapper[28504]: I0318 13:39:16.165787 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-xwqsb_f2b92a53-0b61-4e1d-a306-f9a498e48b38/kube-rbac-proxy/0.log" Mar 18 13:39:16.871248 master-0 kubenswrapper[28504]: I0318 13:39:16.871198 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-m9wjm_5cffbdee-d63b-457e-8610-e880c787c9b4/serve-healthcheck-canary/0.log" Mar 18 13:39:17.380842 master-0 kubenswrapper[28504]: I0318 13:39:17.380696 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-ckwz8_c074751c-6b3c-44df-aca5-42fa69662779/insights-operator/0.log" Mar 18 13:39:19.020633 master-0 kubenswrapper[28504]: I0318 13:39:19.020577 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/alertmanager/0.log" Mar 18 13:39:19.036443 master-0 kubenswrapper[28504]: I0318 
13:39:19.036352 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/config-reloader/0.log" Mar 18 13:39:19.049536 master-0 kubenswrapper[28504]: I0318 13:39:19.049477 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/kube-rbac-proxy-web/0.log" Mar 18 13:39:19.083501 master-0 kubenswrapper[28504]: I0318 13:39:19.083447 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/kube-rbac-proxy/0.log" Mar 18 13:39:19.098566 master-0 kubenswrapper[28504]: I0318 13:39:19.098502 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/kube-rbac-proxy-metric/0.log" Mar 18 13:39:19.124420 master-0 kubenswrapper[28504]: I0318 13:39:19.124317 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/prom-label-proxy/0.log" Mar 18 13:39:19.140174 master-0 kubenswrapper[28504]: I0318 13:39:19.140137 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dad1d337-f09f-4479-831b-d1e02f38148f/init-config-reloader/0.log" Mar 18 13:39:19.181695 master-0 kubenswrapper[28504]: I0318 13:39:19.181157 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-jfdn5_ee1eb80b-5a76-443f-a534-54d5bdc0c98a/cluster-monitoring-operator/0.log" Mar 18 13:39:19.205092 master-0 kubenswrapper[28504]: I0318 13:39:19.205013 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-dldw9_6ed4f640-d481-4e7a-92eb-f0eda82e138c/kube-state-metrics/0.log" Mar 18 13:39:19.218095 master-0 
kubenswrapper[28504]: I0318 13:39:19.218016 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-dldw9_6ed4f640-d481-4e7a-92eb-f0eda82e138c/kube-rbac-proxy-main/0.log" Mar 18 13:39:19.232645 master-0 kubenswrapper[28504]: I0318 13:39:19.232539 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-dldw9_6ed4f640-d481-4e7a-92eb-f0eda82e138c/kube-rbac-proxy-self/0.log" Mar 18 13:39:19.257369 master-0 kubenswrapper[28504]: I0318 13:39:19.257312 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-5688f96659-j2jrm_9590a761-5b85-4145-b0f6-4675eba16998/metrics-server/0.log" Mar 18 13:39:19.291663 master-0 kubenswrapper[28504]: I0318 13:39:19.291544 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-574cc54585-w6425_300098ac-781e-48e5-bbab-4c1009ecf6a2/monitoring-plugin/0.log" Mar 18 13:39:19.315656 master-0 kubenswrapper[28504]: I0318 13:39:19.315558 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-f55c6_b856d226-a137-4954-82c5-5929d579387a/node-exporter/0.log" Mar 18 13:39:19.331692 master-0 kubenswrapper[28504]: I0318 13:39:19.331641 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-f55c6_b856d226-a137-4954-82c5-5929d579387a/kube-rbac-proxy/0.log" Mar 18 13:39:19.347848 master-0 kubenswrapper[28504]: I0318 13:39:19.347795 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-f55c6_b856d226-a137-4954-82c5-5929d579387a/init-textfile/0.log" Mar 18 13:39:19.373370 master-0 kubenswrapper[28504]: I0318 13:39:19.373328 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-bshl9_3c0d0048-6d96-459c-8742-2f092af44a6a/kube-rbac-proxy-main/0.log" Mar 
18 13:39:19.393841 master-0 kubenswrapper[28504]: I0318 13:39:19.393781 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-bshl9_3c0d0048-6d96-459c-8742-2f092af44a6a/kube-rbac-proxy-self/0.log" Mar 18 13:39:19.419407 master-0 kubenswrapper[28504]: I0318 13:39:19.419347 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-bshl9_3c0d0048-6d96-459c-8742-2f092af44a6a/openshift-state-metrics/0.log" Mar 18 13:39:19.429076 master-0 kubenswrapper[28504]: I0318 13:39:19.429025 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/kube-rbac-proxy/0.log" Mar 18 13:39:19.458244 master-0 kubenswrapper[28504]: I0318 13:39:19.458138 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/0.log" Mar 18 13:39:19.459310 master-0 kubenswrapper[28504]: I0318 13:39:19.459242 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/prometheus/0.log" Mar 18 13:39:19.467514 master-0 kubenswrapper[28504]: I0318 13:39:19.467438 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-q8vxr_bd033b5b-af07-4e69-9a5c-46f7c9bde95a/cluster-autoscaler-operator/1.log" Mar 18 13:39:19.477042 master-0 kubenswrapper[28504]: I0318 13:39:19.476979 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/2.log" Mar 18 13:39:19.477042 master-0 kubenswrapper[28504]: I0318 13:39:19.477032 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/config-reloader/0.log" Mar 18 13:39:19.478052 master-0 kubenswrapper[28504]: I0318 13:39:19.478029 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/cluster-baremetal-operator/3.log" Mar 18 13:39:19.490965 master-0 kubenswrapper[28504]: I0318 13:39:19.490551 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-7w5g8_a01c92f5-7938-437d-8262-11598bd8023c/baremetal-kube-rbac-proxy/0.log" Mar 18 13:39:19.504081 master-0 kubenswrapper[28504]: I0318 13:39:19.504031 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/thanos-sidecar/0.log" Mar 18 13:39:19.506180 master-0 kubenswrapper[28504]: I0318 13:39:19.506155 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/1.log" Mar 18 13:39:19.506313 master-0 kubenswrapper[28504]: I0318 13:39:19.506212 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-bjpp5_933a37fd-d76a-4f60-8dd8-301fb73daf42/control-plane-machine-set-operator/0.log" Mar 18 13:39:19.520573 master-0 kubenswrapper[28504]: I0318 13:39:19.520511 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/kube-rbac-proxy/0.log" Mar 18 13:39:19.527047 master-0 kubenswrapper[28504]: I0318 13:39:19.526912 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/kube-rbac-proxy-web/0.log" Mar 18 13:39:19.538050 master-0 kubenswrapper[28504]: I0318 13:39:19.538000 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/0.log" Mar 18 13:39:19.539472 master-0 kubenswrapper[28504]: I0318 13:39:19.539419 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-nf22v_d2e2ef3a-a6e9-44dc-93c7-9f533e75502a/machine-api-operator/1.log" Mar 18 13:39:19.550800 master-0 kubenswrapper[28504]: I0318 13:39:19.550679 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/kube-rbac-proxy/0.log" Mar 18 13:39:19.577304 master-0 kubenswrapper[28504]: I0318 13:39:19.577239 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/kube-rbac-proxy-thanos/0.log" Mar 18 13:39:19.604878 master-0 kubenswrapper[28504]: I0318 13:39:19.604828 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2b69e842-e81a-46b7-b61f-5e2dca016a8d/init-config-reloader/0.log" Mar 18 13:39:19.637046 master-0 kubenswrapper[28504]: I0318 13:39:19.637000 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-6twz2_5a715e53-1874-4993-93d1-504c3470a6f5/prometheus-operator/0.log" Mar 18 13:39:19.653604 master-0 kubenswrapper[28504]: I0318 13:39:19.653549 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-6twz2_5a715e53-1874-4993-93d1-504c3470a6f5/kube-rbac-proxy/0.log" Mar 18 13:39:19.672332 master-0 kubenswrapper[28504]: I0318 13:39:19.672278 28504 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-8qhwm_92e396cd-a0d9-4b6b-9d82-add1ce2a8712/prometheus-operator-admission-webhook/0.log" Mar 18 13:39:19.703529 master-0 kubenswrapper[28504]: I0318 13:39:19.703465 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/1.log" Mar 18 13:39:19.704686 master-0 kubenswrapper[28504]: I0318 13:39:19.704632 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/telemeter-client/2.log" Mar 18 13:39:19.717685 master-0 kubenswrapper[28504]: I0318 13:39:19.717639 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/reload/0.log" Mar 18 13:39:19.734585 master-0 kubenswrapper[28504]: I0318 13:39:19.734528 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5f5f6c46c8-55vzk_0bcf9360-48a8-492e-93c3-ef39ecdaec04/kube-rbac-proxy/0.log" Mar 18 13:39:19.758210 master-0 kubenswrapper[28504]: I0318 13:39:19.758153 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-568b89d8b8-tppnt_d477ff80-0635-4f8e-acea-ec2fc42d5c9a/thanos-query/0.log" Mar 18 13:39:19.772566 master-0 kubenswrapper[28504]: I0318 13:39:19.772509 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-568b89d8b8-tppnt_d477ff80-0635-4f8e-acea-ec2fc42d5c9a/kube-rbac-proxy-web/0.log" Mar 18 13:39:19.803754 master-0 kubenswrapper[28504]: I0318 13:39:19.803648 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-568b89d8b8-tppnt_d477ff80-0635-4f8e-acea-ec2fc42d5c9a/kube-rbac-proxy/0.log" Mar 18 
13:39:19.821989 master-0 kubenswrapper[28504]: I0318 13:39:19.821917 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-568b89d8b8-tppnt_d477ff80-0635-4f8e-acea-ec2fc42d5c9a/prom-label-proxy/0.log" Mar 18 13:39:20.060306 master-0 kubenswrapper[28504]: I0318 13:39:20.060177 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-568b89d8b8-tppnt_d477ff80-0635-4f8e-acea-ec2fc42d5c9a/kube-rbac-proxy-rules/0.log" Mar 18 13:39:20.105445 master-0 kubenswrapper[28504]: I0318 13:39:20.105390 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-568b89d8b8-tppnt_d477ff80-0635-4f8e-acea-ec2fc42d5c9a/kube-rbac-proxy-metrics/0.log" Mar 18 13:39:21.879072 master-0 kubenswrapper[28504]: I0318 13:39:21.879007 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-8v548_8733982a-3ee0-4a7d-b811-9e79ce602150/controller/0.log" Mar 18 13:39:21.896764 master-0 kubenswrapper[28504]: I0318 13:39:21.896713 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-8v548_8733982a-3ee0-4a7d-b811-9e79ce602150/kube-rbac-proxy/0.log" Mar 18 13:39:21.921741 master-0 kubenswrapper[28504]: I0318 13:39:21.921695 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/controller/0.log" Mar 18 13:39:21.982666 master-0 kubenswrapper[28504]: I0318 13:39:21.982554 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/frr/0.log" Mar 18 13:39:22.000386 master-0 kubenswrapper[28504]: I0318 13:39:22.000324 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/reloader/0.log" Mar 18 13:39:22.015338 master-0 kubenswrapper[28504]: I0318 13:39:22.015295 
28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/frr-metrics/0.log" Mar 18 13:39:22.037020 master-0 kubenswrapper[28504]: I0318 13:39:22.036984 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/kube-rbac-proxy/0.log" Mar 18 13:39:22.059250 master-0 kubenswrapper[28504]: I0318 13:39:22.059204 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/kube-rbac-proxy-frr/0.log" Mar 18 13:39:22.074057 master-0 kubenswrapper[28504]: I0318 13:39:22.073959 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-frr-files/0.log" Mar 18 13:39:22.090307 master-0 kubenswrapper[28504]: I0318 13:39:22.090250 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-reloader/0.log" Mar 18 13:39:22.107797 master-0 kubenswrapper[28504]: I0318 13:39:22.107743 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n54zk_eb89e2c4-b8c0-45ff-aa69-eaceb8838561/cp-metrics/0.log" Mar 18 13:39:22.129251 master-0 kubenswrapper[28504]: I0318 13:39:22.129107 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-bf9qb_6fd63f11-ffae-4160-9728-05059c09ef4d/frr-k8s-webhook-server/0.log" Mar 18 13:39:22.170095 master-0 kubenswrapper[28504]: I0318 13:39:22.170027 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c6675654-f8zcx_4f488544-10c9-4e31-b183-60eb24cd6593/manager/0.log" Mar 18 13:39:22.200341 master-0 kubenswrapper[28504]: I0318 13:39:22.200279 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67655b5bb9-s6lrj_133f045a-3c88-4373-84e8-55217f947865/webhook-server/0.log" Mar 18 13:39:22.292416 master-0 kubenswrapper[28504]: I0318 13:39:22.292343 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbx7x_4998b602-5dd9-4ce5-90ff-85e81b4d51fe/speaker/0.log" Mar 18 13:39:22.306609 master-0 kubenswrapper[28504]: I0318 13:39:22.306543 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbx7x_4998b602-5dd9-4ce5-90ff-85e81b4d51fe/kube-rbac-proxy/0.log" Mar 18 13:39:23.622777 master-0 kubenswrapper[28504]: I0318 13:39:23.622727 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-p6tvz_369e9689-e2f6-4276-b096-8db094f8d6ae/cluster-node-tuning-operator/0.log" Mar 18 13:39:23.623416 master-0 kubenswrapper[28504]: I0318 13:39:23.623382 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-p6tvz_369e9689-e2f6-4276-b096-8db094f8d6ae/cluster-node-tuning-operator/1.log" Mar 18 13:39:23.645233 master-0 kubenswrapper[28504]: I0318 13:39:23.645154 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-rlp78_0f16e797-a619-46a8-948a-9fdfc8a9891f/tuned/0.log" Mar 18 13:39:24.183616 master-0 kubenswrapper[28504]: I0318 13:39:24.183532 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-4chrl_1ef6fd38-0021-4460-be7d-eb73d64f4d71/prometheus-operator/0.log" Mar 18 13:39:24.211993 master-0 kubenswrapper[28504]: I0318 13:39:24.211912 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67946df4bf-74qzp_5ae6119a-3e75-4646-8461-44837271a5c4/prometheus-operator-admission-webhook/0.log" Mar 18 13:39:24.232649 master-0 kubenswrapper[28504]: I0318 13:39:24.232590 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67946df4bf-hv8cx_c3b5d185-c320-460f-8a39-0996af3acc72/prometheus-operator-admission-webhook/0.log" Mar 18 13:39:24.255982 master-0 kubenswrapper[28504]: I0318 13:39:24.254615 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-rxf85_8a411a13-62ee-4723-995c-48b9ddd11c48/operator/0.log" Mar 18 13:39:24.279666 master-0 kubenswrapper[28504]: I0318 13:39:24.279605 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-6f5bc999fb-bzb9c_f30bc0fa-c59c-4581-9176-6777591e1a33/perses-operator/0.log" Mar 18 13:39:25.541088 master-0 kubenswrapper[28504]: I0318 13:39:25.541043 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-hsjwr_b0cb1744-6db9-401b-8d24-a9187582cdf8/cert-manager-controller/0.log" Mar 18 13:39:25.558457 master-0 kubenswrapper[28504]: I0318 13:39:25.558390 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-sb4gj_3ec14c15-e45e-4eb1-b495-31807f7a691e/cert-manager-cainjector/0.log" Mar 18 13:39:25.569851 master-0 kubenswrapper[28504]: I0318 13:39:25.569791 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-lfgvr_8df495a4-1112-45d4-8e9d-fc8b9395c7b6/cert-manager-webhook/0.log" Mar 18 13:39:25.742446 master-0 kubenswrapper[28504]: I0318 13:39:25.742326 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-5zbrg_c2c4572e-0b38-4db1-96e5-6a35e29048e7/kube-apiserver-operator/0.log" Mar 18 13:39:25.744903 master-0 kubenswrapper[28504]: I0318 13:39:25.744868 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-5zbrg_c2c4572e-0b38-4db1-96e5-6a35e29048e7/kube-apiserver-operator/1.log" Mar 18 13:39:26.482143 master-0 kubenswrapper[28504]: I0318 13:39:26.482103 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_88cd8323-8857-41fe-85d4-e6064330ec71/installer/0.log" Mar 18 13:39:26.506480 master-0 kubenswrapper[28504]: I0318 13:39:26.506429 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_810ed1fb-bd32-4e5d-94e6-011f21ff37d3/installer/0.log" Mar 18 13:39:26.530508 master-0 kubenswrapper[28504]: I0318 13:39:26.530017 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_e9e7bf0a-493f-4ebf-8aa1-b51b08e7aeef/installer/0.log" Mar 18 13:39:26.710855 master-0 kubenswrapper[28504]: I0318 13:39:26.710779 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver/0.log" Mar 18 13:39:26.725432 master-0 kubenswrapper[28504]: I0318 13:39:26.725363 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-cert-syncer/0.log" Mar 18 13:39:26.745872 master-0 kubenswrapper[28504]: I0318 13:39:26.745747 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-cert-regeneration-controller/0.log" Mar 18 13:39:26.757626 master-0 kubenswrapper[28504]: 
I0318 13:39:26.757584 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-insecure-readyz/0.log" Mar 18 13:39:26.774671 master-0 kubenswrapper[28504]: I0318 13:39:26.774614 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-check-endpoints/0.log" Mar 18 13:39:26.790879 master-0 kubenswrapper[28504]: I0318 13:39:26.790832 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/setup/0.log" Mar 18 13:39:27.526837 master-0 kubenswrapper[28504]: I0318 13:39:27.526785 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8jrfz_234a5a6c-3790-49d0-b1e7-86f81048d96a/kube-rbac-proxy/0.log" Mar 18 13:39:27.552775 master-0 kubenswrapper[28504]: I0318 13:39:27.552690 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8jrfz_234a5a6c-3790-49d0-b1e7-86f81048d96a/manager/1.log" Mar 18 13:39:27.591966 master-0 kubenswrapper[28504]: I0318 13:39:27.591826 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8jrfz_234a5a6c-3790-49d0-b1e7-86f81048d96a/manager/0.log" Mar 18 13:39:28.096851 master-0 kubenswrapper[28504]: I0318 13:39:28.096778 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-hsjwr_b0cb1744-6db9-401b-8d24-a9187582cdf8/cert-manager-controller/0.log" Mar 18 13:39:28.122813 master-0 kubenswrapper[28504]: I0318 13:39:28.122762 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-sb4gj_3ec14c15-e45e-4eb1-b495-31807f7a691e/cert-manager-cainjector/0.log" 
Mar 18 13:39:28.141264 master-0 kubenswrapper[28504]: I0318 13:39:28.141200 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-lfgvr_8df495a4-1112-45d4-8e9d-fc8b9395c7b6/cert-manager-webhook/0.log" Mar 18 13:39:28.632598 master-0 kubenswrapper[28504]: I0318 13:39:28.632540 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-qsfzf_79aaf490-69d3-404d-9c69-e062717930a0/nmstate-console-plugin/0.log" Mar 18 13:39:28.655325 master-0 kubenswrapper[28504]: I0318 13:39:28.655278 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pl77f_7da48df5-5ace-4bcb-a96f-a96bea9b7657/nmstate-handler/0.log" Mar 18 13:39:28.674395 master-0 kubenswrapper[28504]: I0318 13:39:28.674351 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m55sr_3cdd455d-0f9a-4c4c-99d3-231f0dd90d04/nmstate-metrics/0.log" Mar 18 13:39:28.697291 master-0 kubenswrapper[28504]: I0318 13:39:28.697250 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m55sr_3cdd455d-0f9a-4c4c-99d3-231f0dd90d04/kube-rbac-proxy/0.log" Mar 18 13:39:28.720428 master-0 kubenswrapper[28504]: I0318 13:39:28.720365 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-kx4rd_9b690cda-08f0-4606-a18f-a1be217b5037/nmstate-operator/0.log" Mar 18 13:39:28.745247 master-0 kubenswrapper[28504]: I0318 13:39:28.745190 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-5snlm_ce064a48-4cf9-4160-82f0-307c9a64733b/nmstate-webhook/0.log" Mar 18 13:39:29.499371 master-0 kubenswrapper[28504]: I0318 13:39:29.499309 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9bhww_4086d06f-d50e-4632-9da7-508909429eef/kube-multus/0.log" 
Mar 18 13:39:29.519613 master-0 kubenswrapper[28504]: I0318 13:39:29.519558 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/kube-multus-additional-cni-plugins/0.log" Mar 18 13:39:29.537713 master-0 kubenswrapper[28504]: I0318 13:39:29.537647 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/egress-router-binary-copy/0.log" Mar 18 13:39:29.550922 master-0 kubenswrapper[28504]: I0318 13:39:29.550861 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/cni-plugins/0.log" Mar 18 13:39:29.565614 master-0 kubenswrapper[28504]: I0318 13:39:29.565561 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/bond-cni-plugin/0.log" Mar 18 13:39:29.583298 master-0 kubenswrapper[28504]: I0318 13:39:29.583220 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/routeoverride-cni/0.log" Mar 18 13:39:29.598614 master-0 kubenswrapper[28504]: I0318 13:39:29.598505 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/whereabouts-cni-bincopy/0.log" Mar 18 13:39:29.614473 master-0 kubenswrapper[28504]: I0318 13:39:29.614022 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpppb_46ae7b31-c91c-477e-a04a-a3a8541747be/whereabouts-cni/0.log" Mar 18 13:39:29.656380 master-0 kubenswrapper[28504]: I0318 13:39:29.656296 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-bnrjt_bc9af4af-fb39-4a51-83ae-dab3f1d159f2/multus-admission-controller/0.log" Mar 18 13:39:29.670330 master-0 kubenswrapper[28504]: I0318 13:39:29.670274 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-bnrjt_bc9af4af-fb39-4a51-83ae-dab3f1d159f2/kube-rbac-proxy/0.log" Mar 18 13:39:29.699069 master-0 kubenswrapper[28504]: I0318 13:39:29.699027 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kq2j4_5e691486-8540-4b79-8eed-b0fb829071db/network-metrics-daemon/0.log" Mar 18 13:39:29.712560 master-0 kubenswrapper[28504]: I0318 13:39:29.712502 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kq2j4_5e691486-8540-4b79-8eed-b0fb829071db/kube-rbac-proxy/0.log" Mar 18 13:39:30.323585 master-0 kubenswrapper[28504]: I0318 13:39:30.323515 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-6dbdc6c64-kjqlc_c3a74bb9-0939-4dd9-ad29-50ac6f179ee0/manager/0.log" Mar 18 13:39:30.348440 master-0 kubenswrapper[28504]: I0318 13:39:30.348317 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-p2zgz_d8d1ffef-c93c-4a17-a978-9d3dd6896ff2/vg-manager/1.log" Mar 18 13:39:30.351232 master-0 kubenswrapper[28504]: I0318 13:39:30.351190 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-p2zgz_d8d1ffef-c93c-4a17-a978-9d3dd6896ff2/vg-manager/0.log" Mar 18 13:39:30.910042 master-0 kubenswrapper[28504]: I0318 13:39:30.909955 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_f4d88fc1-4e92-432e-ac2c-e1c489b15e93/installer/0.log" Mar 18 13:39:30.931525 master-0 kubenswrapper[28504]: I0318 13:39:30.931240 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_89d262b4-b1a7-49b8-a8d2-1bb1ea671df8/installer/0.log" Mar 18 13:39:30.955510 master-0 kubenswrapper[28504]: I0318 13:39:30.955151 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_34b80036-6868-4e0b-9f3a-84c2817e566d/installer/0.log" Mar 18 13:39:31.128993 master-0 kubenswrapper[28504]: I0318 13:39:31.126930 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9a1e88afeffbcb0115b3be33556cf14e/kube-controller-manager/0.log" Mar 18 13:39:31.175470 master-0 kubenswrapper[28504]: I0318 13:39:31.175351 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9a1e88afeffbcb0115b3be33556cf14e/cluster-policy-controller/0.log" Mar 18 13:39:31.208962 master-0 kubenswrapper[28504]: I0318 13:39:31.202600 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9a1e88afeffbcb0115b3be33556cf14e/kube-controller-manager-cert-syncer/0.log" Mar 18 13:39:31.224340 master-0 kubenswrapper[28504]: I0318 13:39:31.224295 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9a1e88afeffbcb0115b3be33556cf14e/kube-controller-manager-recovery-controller/0.log" Mar 18 13:39:31.427435 master-0 kubenswrapper[28504]: I0318 13:39:31.427312 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-qsfzf_79aaf490-69d3-404d-9c69-e062717930a0/nmstate-console-plugin/0.log" Mar 18 13:39:31.443667 master-0 kubenswrapper[28504]: I0318 13:39:31.443597 28504 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-pl77f_7da48df5-5ace-4bcb-a96f-a96bea9b7657/nmstate-handler/0.log" Mar 18 13:39:31.471957 master-0 kubenswrapper[28504]: I0318 13:39:31.471875 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m55sr_3cdd455d-0f9a-4c4c-99d3-231f0dd90d04/nmstate-metrics/0.log" Mar 18 13:39:31.483705 master-0 kubenswrapper[28504]: I0318 13:39:31.483652 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-m55sr_3cdd455d-0f9a-4c4c-99d3-231f0dd90d04/kube-rbac-proxy/0.log" Mar 18 13:39:31.518021 master-0 kubenswrapper[28504]: I0318 13:39:31.517862 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-kx4rd_9b690cda-08f0-4606-a18f-a1be217b5037/nmstate-operator/0.log" Mar 18 13:39:31.538110 master-0 kubenswrapper[28504]: I0318 13:39:31.538047 28504 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-5snlm_ce064a48-4cf9-4160-82f0-307c9a64733b/nmstate-webhook/0.log"